Test Report: Docker_Linux_containerd_arm64 22168

9b787847521167b42f6debd67da4dc2d018928d7:2025-12-17:42812

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.76
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.2
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.24
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.21
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.27
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.02
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.7
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.02
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.41
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.68
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.39
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.08
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 105.33
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.25
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.28
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.25
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.28
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.26
358 TestKubernetesUpgrade 796.82
426 TestStartStop/group/no-preload/serial/FirstStart 509.05
437 TestStartStop/group/newest-cni/serial/FirstStart 501.23
438 TestStartStop/group/no-preload/serial/DeployApp 3.04
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 83.14
442 TestStartStop/group/no-preload/serial/SecondStart 370.86
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 98.55
447 TestStartStop/group/newest-cni/serial/SecondStart 373.02
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.72
452 TestStartStop/group/newest-cni/serial/Pause 9.59
480 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 291.44
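
Each failure below can, in principle, be replayed in isolation with Go's subtest filter. A minimal sketch, assuming the minikube repo's test/integration layout and the stock go test harness; any extra harness flags the CI passes (driver, runtime, and binary start-args) are omitted here:

    # hypothetical local repro of the first failure in the table above
    go test ./test/integration -v -timeout 120m \
      -run 'TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy'
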
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1217 00:37:53.305870 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:40:09.433213 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:40:37.147162 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.876783 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.883278 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.894903 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.916642 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.958151 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.039664 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.201376 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.523172 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:58.165332 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:59.447347 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:02.010278 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:07.132199 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:17.374449 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:37.855963 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:18.818734 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:44:40.740201 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:09.433353 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.379086834s)

-- stdout --
	* [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46313
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46313 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000056154s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
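
The failure is consistent across both kubeadm attempts above: the kubelet never answers on 127.0.0.1:10248, and the preflight output flags deprecated cgroups v1 support on kubelet v1.35. A minimal sketch of the log's own remediation steps, run against the failing profile (profile name and flags taken from the output above; whether either step resolves this flake is not verified here):

    # inspect kubelet on the node, as the kubeadm output recommends
    minikube -p functional-608344 ssh "sudo systemctl status kubelet"
    minikube -p functional-608344 ssh "sudo journalctl -xeu kubelet"

    # retry with the cgroup-driver hint from the "Suggestion" line above
    minikube start -p functional-608344 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the host stays on cgroups v1, the WARNING above additionally requires
    # FailCgroupV1=false in the kubelet configuration (mechanism not shown here)
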
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
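
For quick re-checks, the relevant state and port fields in the dump above can be pulled directly with docker's Go-template filter; a sketch (container name from the report, field paths from the JSON above):

    docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' functional-608344
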
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 6 (291.254783ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 00:46:08.713211 1255116 status.go:458] kubeconfig endpoint: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
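
The status output above names its own fix for the stale kubectl context; against this profile it would be run as (a sketch, profile name from the report):

    minikube -p functional-608344 update-context
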
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/12112432.pem                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image save --daemon kicbase/echo-server:functional-416001 --alsologtostderr                                                           │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /usr/share/ca-certificates/12112432.pem                                                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/test/nested/copy/1211243/hosts                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp functional-416001:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170430960/001/cp-test.txt                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format short --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format yaml --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                               │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh pgrep buildkitd                                                                                                                   │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /tmp/does/not/exist/cp-test.txt                                                                     │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:47.077849 1249620 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:47.077955 1249620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:47.077958 1249620 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:47.077962 1249620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:47.078209 1249620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:37:47.078611 1249620 out.go:368] Setting JSON to false
	I1217 00:37:47.079403 1249620 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22817,"bootTime":1765909050,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:37:47.079462 1249620 start.go:143] virtualization:  
	I1217 00:37:47.083874 1249620 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:37:47.088923 1249620 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:37:47.089023 1249620 notify.go:221] Checking for updates...
	I1217 00:37:47.093049 1249620 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:47.096528 1249620 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:37:47.100194 1249620 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:37:47.103536 1249620 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:37:47.106797 1249620 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:37:47.110131 1249620 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:47.131149 1249620 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:37:47.131268 1249620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:47.195041 1249620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:37:47.185856639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:37:47.195134 1249620 docker.go:319] overlay module found
	I1217 00:37:47.200386 1249620 out.go:179] * Using the docker driver based on user configuration
	I1217 00:37:47.203383 1249620 start.go:309] selected driver: docker
	I1217 00:37:47.203393 1249620 start.go:927] validating driver "docker" against <nil>
	I1217 00:37:47.203405 1249620 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:37:47.204106 1249620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:47.262382 1249620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:37:47.253485946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:37:47.262538 1249620 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:37:47.262754 1249620 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:37:47.265880 1249620 out.go:179] * Using Docker driver with root privileges
	I1217 00:37:47.268790 1249620 cni.go:84] Creating CNI manager for ""
	I1217 00:37:47.268853 1249620 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:37:47.268861 1249620 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:37:47.268932 1249620 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:47.272212 1249620 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:37:47.275041 1249620 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:37:47.277990 1249620 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:37:47.280873 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:37:47.280909 1249620 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:37:47.280917 1249620 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:47.280930 1249620 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:37:47.281017 1249620 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:37:47.281026 1249620 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:37:47.281391 1249620 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:37:47.281410 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json: {Name:mk1f8807fb33e420cc0d4f5da5e8ec1f77d72d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:47.300587 1249620 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:37:47.300596 1249620 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:37:47.300616 1249620 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:37:47.300638 1249620 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:47.300752 1249620 start.go:364] duration metric: took 100.801µs to acquireMachinesLock for "functional-608344"
	I1217 00:37:47.300777 1249620 start.go:93] Provisioning new machine with config: &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:37:47.300842 1249620 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:37:47.306068 1249620 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1217 00:37:47.306362 1249620 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46313 to docker env.
	I1217 00:37:47.306386 1249620 start.go:159] libmachine.API.Create for "functional-608344" (driver="docker")
	I1217 00:37:47.306409 1249620 client.go:173] LocalClient.Create starting
	I1217 00:37:47.306477 1249620 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 00:37:47.306508 1249620 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:47.306525 1249620 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:47.306573 1249620 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 00:37:47.306592 1249620 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:47.306603 1249620 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:47.306962 1249620 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:37:47.322361 1249620 cli_runner.go:211] docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:37:47.322443 1249620 network_create.go:284] running [docker network inspect functional-608344] to gather additional debugging logs...
	I1217 00:37:47.322457 1249620 cli_runner.go:164] Run: docker network inspect functional-608344
	W1217 00:37:47.337510 1249620 cli_runner.go:211] docker network inspect functional-608344 returned with exit code 1
	I1217 00:37:47.337528 1249620 network_create.go:287] error running [docker network inspect functional-608344]: docker network inspect functional-608344: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-608344 not found
	I1217 00:37:47.337539 1249620 network_create.go:289] output of [docker network inspect functional-608344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-608344 not found
	
	** /stderr **
	I1217 00:37:47.337634 1249620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:47.354391 1249620 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f3680}
	I1217 00:37:47.354423 1249620 network_create.go:124] attempt to create docker network functional-608344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:37:47.354483 1249620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-608344 functional-608344
	I1217 00:37:47.411696 1249620 network_create.go:108] docker network functional-608344 192.168.49.0/24 created
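	The network-create step above can be reproduced by hand when debugging subnet collisions. A minimal sketch, reusing the name, subnet, and options taken verbatim from the log (run against any local docker daemon):
	
	# recreate the dedicated bridge network minikube builds for the cluster
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-608344 functional-608344
	# confirm the subnet and gateway that were actually applied
	docker network inspect functional-608344 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'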
	I1217 00:37:47.411718 1249620 kic.go:121] calculated static IP "192.168.49.2" for the "functional-608344" container
	I1217 00:37:47.411806 1249620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:37:47.427711 1249620 cli_runner.go:164] Run: docker volume create functional-608344 --label name.minikube.sigs.k8s.io=functional-608344 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:37:47.445155 1249620 oci.go:103] Successfully created a docker volume functional-608344
	I1217 00:37:47.445233 1249620 cli_runner.go:164] Run: docker run --rm --name functional-608344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-608344 --entrypoint /usr/bin/test -v functional-608344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:37:47.948682 1249620 oci.go:107] Successfully prepared a docker volume functional-608344
	I1217 00:37:47.948746 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:37:47.948755 1249620 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:37:47.948815 1249620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-608344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:37:51.849405 1249620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-608344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.900544075s)
	I1217 00:37:51.849425 1249620 kic.go:203] duration metric: took 3.90066859s to extract preloaded images to volume ...
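	The ~3.9s step above untars the lz4 preload into the named volume from a throwaway container. To spot-check what landed in the volume, something like the following works (busybox is an arbitrary image choice for the sketch, not the image minikube uses):
	
	# list the containerd state directory that the preload populated
	docker run --rm -v functional-608344:/var busybox ls /var/lib/containerd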
	W1217 00:37:51.849565 1249620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 00:37:51.849687 1249620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:37:51.903066 1249620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-608344 --name functional-608344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-608344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-608344 --network functional-608344 --ip 192.168.49.2 --volume functional-608344:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:37:52.213466 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Running}}
	I1217 00:37:52.235175 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:37:52.258972 1249620 cli_runner.go:164] Run: docker exec functional-608344 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:37:52.308518 1249620 oci.go:144] the created container "functional-608344" has a running status.
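	All of the container's service ports (22, 2376, 5000, 8441, 32443) are published to ephemeral ports on 127.0.0.1, which is why every later SSH step has to look the bound port up first. A sketch of that lookup with plain docker:
	
	# which host port did docker bind for sshd in the node container?
	docker port functional-608344 22/tcp
	# equivalent inspect query, matching the format string used in the log
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-608344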
	I1217 00:37:52.308551 1249620 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa...
	I1217 00:37:53.208026 1249620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:37:53.226885 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:37:53.243635 1249620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:37:53.243647 1249620 kic_runner.go:114] Args: [docker exec --privileged functional-608344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:37:53.281954 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:37:53.300084 1249620 machine.go:94] provisionDockerMachine start ...
	I1217 00:37:53.300164 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:53.316642 1249620 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:53.316974 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:37:53.316981 1249620 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:37:53.317637 1249620 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47132->127.0.0.1:33943: read: connection reset by peer
	I1217 00:37:56.449193 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:37:56.449207 1249620 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:37:56.449269 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:56.467319 1249620 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:56.467623 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:37:56.467631 1249620 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:37:56.606336 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:37:56.606405 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:56.623349 1249620 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:56.623637 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:37:56.623652 1249620 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:37:56.753874 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:37:56.753890 1249620 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:37:56.753919 1249620 ubuntu.go:190] setting up certificates
	I1217 00:37:56.753931 1249620 provision.go:84] configureAuth start
	I1217 00:37:56.753998 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:37:56.770211 1249620 provision.go:143] copyHostCerts
	I1217 00:37:56.770264 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:37:56.770273 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:37:56.770346 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:37:56.770434 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:37:56.770437 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:37:56.770461 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:37:56.770508 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:37:56.770511 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:37:56.770532 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:37:56.770572 1249620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:37:56.858168 1249620 provision.go:177] copyRemoteCerts
	I1217 00:37:56.858219 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:37:56.858255 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:56.875117 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:37:56.969064 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:37:56.985613 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:37:57.003111 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:37:57.022509 1249620 provision.go:87] duration metric: took 268.565373ms to configureAuth
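	configureAuth issues a server certificate whose SANs cover every name the machine may be addressed by (127.0.0.1, 192.168.49.2, functional-608344, localhost, minikube). To double-check the SANs on the cert copied to the node, a sketch using openssl against the remote path from the log:
	
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'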
	I1217 00:37:57.022527 1249620 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:37:57.022715 1249620 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:37:57.022722 1249620 machine.go:97] duration metric: took 3.722627932s to provisionDockerMachine
	I1217 00:37:57.022728 1249620 client.go:176] duration metric: took 9.716314707s to LocalClient.Create
	I1217 00:37:57.022750 1249620 start.go:167] duration metric: took 9.716365128s to libmachine.API.Create "functional-608344"
	I1217 00:37:57.022764 1249620 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:37:57.022773 1249620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:37:57.022826 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:37:57.022864 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:57.040877 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:37:57.137766 1249620 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:37:57.140898 1249620 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:37:57.140915 1249620 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:37:57.140925 1249620 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:37:57.140979 1249620 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:37:57.141064 1249620 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:37:57.141148 1249620 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:37:57.141190 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:37:57.148704 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:37:57.166326 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:37:57.183180 1249620 start.go:296] duration metric: took 160.401967ms for postStartSetup
	I1217 00:37:57.183557 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:37:57.201225 1249620 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:37:57.201852 1249620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:37:57.201900 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:57.220028 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:37:57.310294 1249620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:37:57.314936 1249620 start.go:128] duration metric: took 10.014079181s to createHost
	I1217 00:37:57.314951 1249620 start.go:83] releasing machines lock for "functional-608344", held for 10.014191174s
	I1217 00:37:57.315028 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:37:57.336100 1249620 out.go:179] * Found network options:
	I1217 00:37:57.339083 1249620 out.go:179]   - HTTP_PROXY=localhost:46313
	W1217 00:37:57.342000 1249620 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1217 00:37:57.344855 1249620 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1217 00:37:57.347764 1249620 ssh_runner.go:195] Run: cat /version.json
	I1217 00:37:57.347805 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:57.347818 1249620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:37:57.347874 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:37:57.367319 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:37:57.368145 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:37:57.457179 1249620 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:57.546450 1249620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:37:57.551120 1249620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:37:57.551185 1249620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:37:57.578033 1249620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 00:37:57.578047 1249620 start.go:496] detecting cgroup driver to use...
	I1217 00:37:57.578079 1249620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:37:57.578126 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:37:57.594624 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:37:57.607799 1249620 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:37:57.607861 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:37:57.626516 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:37:57.646515 1249620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:37:57.762826 1249620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:37:57.880691 1249620 docker.go:234] disabling docker service ...
	I1217 00:37:57.880766 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:37:57.902308 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:37:57.916156 1249620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:37:58.047046 1249620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:37:58.174365 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:37:58.187558 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:37:58.202152 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:37:58.210580 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:37:58.219138 1249620 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:37:58.219194 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:37:58.227736 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:37:58.236218 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:37:58.245396 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:37:58.253913 1249620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:37:58.261959 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:37:58.270266 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:37:58.279078 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:37:58.287787 1249620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:37:58.295133 1249620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:37:58.302379 1249620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:58.443740 1249620 ssh_runner.go:195] Run: sudo systemctl restart containerd
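	The sed sequence above is how minikube reconciles /etc/containerd/config.toml with the cgroupfs driver it detected on the host. Condensed to the two edits that matter most, followed by the restart (all three commands appear verbatim in the log):
	
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd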
	I1217 00:37:58.584688 1249620 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:37:58.584746 1249620 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:37:58.588390 1249620 start.go:564] Will wait 60s for crictl version
	I1217 00:37:58.588441 1249620 ssh_runner.go:195] Run: which crictl
	I1217 00:37:58.591767 1249620 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:37:58.618628 1249620 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:37:58.618687 1249620 ssh_runner.go:195] Run: containerd --version
	I1217 00:37:58.639379 1249620 ssh_runner.go:195] Run: containerd --version
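	crictl resolves its runtime endpoint from the /etc/crictl.yaml written a few steps earlier; the same check also works with the endpoint passed explicitly, which is handy when the config file is suspect:
	
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version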
	I1217 00:37:58.662959 1249620 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:37:58.665926 1249620 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:58.682046 1249620 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:37:58.685871 1249620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
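	The host entry is injected by filtering /etc/hosts into a temp file and copying it back. The cp (rather than mv) is deliberate: inside a container /etc/hosts is a bind mount, and a rename onto it fails with "device or resource busy". The same pattern, generalized:
	
	# rewrite /etc/hosts in place without replacing the bind-mounted inode
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts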
	I1217 00:37:58.695416 1249620 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:37:58.695549 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:37:58.695626 1249620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:58.726170 1249620 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:37:58.726182 1249620 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:37:58.726241 1249620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:58.752080 1249620 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:37:58.752092 1249620 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:37:58.752098 1249620 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:37:58.752195 1249620 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:37:58.752260 1249620 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:37:58.782017 1249620 cni.go:84] Creating CNI manager for ""
	I1217 00:37:58.782028 1249620 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:37:58.782047 1249620 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:37:58.782068 1249620 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:37:58.782183 1249620 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:37:58.782249 1249620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:37:58.790084 1249620 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:37:58.790144 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:37:58.798586 1249620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:37:58.811228 1249620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:37:58.824689 1249620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
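	The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new and is copied into place just before init. If a config like the one printed above needs validating without touching the node, kubeadm supports a dry run (a sketch, executed inside the node container once the file is in place):
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run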
	I1217 00:37:58.839221 1249620 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:37:58.842780 1249620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:37:58.852390 1249620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:58.975284 1249620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:58.992908 1249620 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:37:58.992919 1249620 certs.go:195] generating shared ca certs ...
	I1217 00:37:58.992933 1249620 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:58.993079 1249620 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:37:58.993124 1249620 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:37:58.993131 1249620 certs.go:257] generating profile certs ...
	I1217 00:37:58.993188 1249620 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:37:58.993197 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt with IP's: []
	I1217 00:37:59.036718 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt ...
	I1217 00:37:59.036734 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: {Name:mke1055b1743c3fc8eb6e33c072f1d335124c556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.036963 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key ...
	I1217 00:37:59.036970 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key: {Name:mk461e2e0eab3edf14ca28ae6602a298e3e17f65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.037073 1249620 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:37:59.037084 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:37:59.167465 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 ...
	I1217 00:37:59.167483 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443: {Name:mk8dc0d6c0daab9347068427e8209973d836c8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.167673 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443 ...
	I1217 00:37:59.167681 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443: {Name:mkc3d620dd9591dfedaabfc3021cd8140ed4a374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.167762 1249620 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt
	I1217 00:37:59.167834 1249620 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key
	I1217 00:37:59.167883 1249620 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:37:59.167896 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt with IP's: []
	I1217 00:37:59.557517 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt ...
	I1217 00:37:59.557533 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt: {Name:mkc09dd685c225df59597749144f64a4b663565b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.557741 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key ...
	I1217 00:37:59.557749 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key: {Name:mk1a97bc531dde4d6b10d1362faa778722096d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:59.557950 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:37:59.557991 1249620 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:37:59.558001 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:37:59.558027 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:37:59.558051 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:37:59.558073 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:37:59.558115 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:37:59.558718 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:37:59.576791 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:37:59.597061 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:37:59.615468 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:37:59.633601 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:37:59.651138 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:37:59.669128 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:37:59.686690 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:37:59.704426 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:37:59.723065 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:37:59.741082 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:37:59.759073 1249620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:37:59.771607 1249620 ssh_runner.go:195] Run: openssl version
	I1217 00:37:59.777781 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:37:59.785025 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:37:59.793092 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:37:59.796760 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:37:59.796815 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:37:59.837817 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:37:59.845315 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:37:59.852901 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:59.860449 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:37:59.869748 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:59.874373 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:59.874439 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:59.915911 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:37:59.926380 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:37:59.934031 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:37:59.941218 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:37:59.948698 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:37:59.952356 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:37:59.952416 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:37:59.993749 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:38:00.006389 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
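	The 3ec20f2e.0, b5213941.0, and 51391683.0 symlink names above follow the OpenSSL subject-hash convention: TLS clients look CA certs up in /etc/ssl/certs by the hash of their subject. The pattern written out, using the minikubeCA path from the log:
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"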
	I1217 00:38:00.056235 1249620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:38:00.068542 1249620 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:38:00.068590 1249620 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:38:00.068662 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:38:00.068728 1249620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:38:00.239824 1249620 cri.go:89] found id: ""
	I1217 00:38:00.239902 1249620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:38:00.262961 1249620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:38:00.281451 1249620 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:38:00.281523 1249620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:38:00.300340 1249620 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:38:00.300352 1249620 kubeadm.go:158] found existing configuration files:
	
	I1217 00:38:00.300408 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:38:00.323462 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:38:00.323527 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:38:00.334356 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:38:00.344459 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:38:00.344526 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:38:00.354787 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:38:00.364585 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:38:00.364650 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:38:00.373910 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:38:00.387470 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:38:00.387536 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:38:00.397119 1249620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:38:00.514797 1249620 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:38:00.515278 1249620 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:38:00.598806 1249620 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:42:04.803749 1249620 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:42:04.803773 1249620 kubeadm.go:319] 
	I1217 00:42:04.803890 1249620 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:42:04.809840 1249620 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:42:04.809899 1249620 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:04.809998 1249620 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:04.810058 1249620 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:42:04.810098 1249620 kubeadm.go:319] OS: Linux
	I1217 00:42:04.810145 1249620 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:04.810197 1249620 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:42:04.810250 1249620 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:04.810307 1249620 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:04.810358 1249620 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:04.810412 1249620 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:04.810468 1249620 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:04.810540 1249620 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:04.810604 1249620 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:42:04.810676 1249620 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:04.810773 1249620 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:04.810862 1249620 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:04.810922 1249620 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:04.812829 1249620 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:04.812911 1249620 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:04.812978 1249620 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:04.813044 1249620 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:04.813099 1249620 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:04.813158 1249620 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:04.813206 1249620 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:04.813258 1249620 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:04.813377 1249620 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:42:04.813428 1249620 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:04.813545 1249620 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:42:04.813609 1249620 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:04.813694 1249620 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:04.813737 1249620 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:04.813791 1249620 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:04.813841 1249620 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:04.813896 1249620 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:04.813950 1249620 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:04.814011 1249620 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:04.814064 1249620 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:04.814143 1249620 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:04.814207 1249620 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:42:04.819008 1249620 out.go:252]   - Booting up control plane ...
	I1217 00:42:04.819129 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:42:04.819221 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:42:04.819294 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:42:04.819396 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:42:04.819491 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:42:04.819595 1249620 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:42:04.819684 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:42:04.819723 1249620 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:42:04.819857 1249620 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:42:04.819983 1249620 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:42:04.820051 1249620 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000056154s
	I1217 00:42:04.820054 1249620 kubeadm.go:319] 
	I1217 00:42:04.820109 1249620 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:42:04.820140 1249620 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:42:04.820258 1249620 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:42:04.820265 1249620 kubeadm.go:319] 
	I1217 00:42:04.820368 1249620 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:42:04.820400 1249620 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:42:04.820442 1249620 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 00:42:04.820572 1249620 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000056154s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 00:42:04.820670 1249620 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 00:42:04.820814 1249620 kubeadm.go:319] 
	I1217 00:42:05.230104 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:05.243518 1249620 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:05.243570 1249620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:05.251369 1249620 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:05.251377 1249620 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:05.251426 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:42:05.259153 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:05.259207 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:05.266346 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:42:05.273811 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:05.273869 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:05.281258 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:42:05.288736 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:05.288792 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:05.296179 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:42:05.303544 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:05.303599 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:42:05.310896 1249620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:05.347419 1249620 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:42:05.347466 1249620 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:05.416369 1249620 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:05.416458 1249620 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:42:05.416492 1249620 kubeadm.go:319] OS: Linux
	I1217 00:42:05.416536 1249620 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:05.416582 1249620 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:42:05.416628 1249620 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:05.416675 1249620 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:05.416721 1249620 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:05.416768 1249620 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:05.416812 1249620 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:05.416858 1249620 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:05.416907 1249620 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:42:05.486343 1249620 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:05.486439 1249620 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:05.486528 1249620 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:05.492558 1249620 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:05.496200 1249620 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:05.496287 1249620 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:05.496355 1249620 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:05.496453 1249620 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:42:05.496518 1249620 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:42:05.496591 1249620 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:42:05.496648 1249620 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:42:05.496715 1249620 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:42:05.496780 1249620 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:42:05.496902 1249620 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:42:05.496980 1249620 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:42:05.497270 1249620 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:42:05.497325 1249620 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:05.785525 1249620 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:06.683665 1249620 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:07.376722 1249620 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:07.546496 1249620 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:07.778465 1249620 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:07.779108 1249620 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:07.781678 1249620 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:42:07.784866 1249620 out.go:252]   - Booting up control plane ...
	I1217 00:42:07.784967 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:42:07.785044 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:42:07.785114 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:42:07.805276 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:42:07.805373 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:42:07.816752 1249620 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:42:07.817521 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:42:07.817852 1249620 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:42:07.970505 1249620 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:42:07.970612 1249620 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:46:07.971562 1249620 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001108504s
	I1217 00:46:07.971586 1249620 kubeadm.go:319] 
	I1217 00:46:07.971690 1249620 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:46:07.971746 1249620 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:46:07.972080 1249620 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:46:07.972086 1249620 kubeadm.go:319] 
	I1217 00:46:07.972275 1249620 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:46:07.972565 1249620 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:46:07.972632 1249620 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:46:07.972636 1249620 kubeadm.go:319] 
	I1217 00:46:07.977426 1249620 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:46:07.977935 1249620 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:46:07.978057 1249620 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:46:07.978313 1249620 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:46:07.978322 1249620 kubeadm.go:319] 
	I1217 00:46:07.978405 1249620 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:46:07.978459 1249620 kubeadm.go:403] duration metric: took 8m7.909874388s to StartCluster
	I1217 00:46:07.978499 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:46:07.978557 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:46:08.008871 1249620 cri.go:89] found id: ""
	I1217 00:46:08.008898 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.008911 1249620 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:46:08.008917 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:46:08.008998 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:46:08.034772 1249620 cri.go:89] found id: ""
	I1217 00:46:08.034786 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.034793 1249620 logs.go:284] No container was found matching "etcd"
	I1217 00:46:08.034801 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:46:08.034868 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:46:08.061353 1249620 cri.go:89] found id: ""
	I1217 00:46:08.061367 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.061374 1249620 logs.go:284] No container was found matching "coredns"
	I1217 00:46:08.061379 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:46:08.061440 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:46:08.087277 1249620 cri.go:89] found id: ""
	I1217 00:46:08.087291 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.087299 1249620 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:46:08.087304 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:46:08.087364 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:46:08.116240 1249620 cri.go:89] found id: ""
	I1217 00:46:08.116254 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.116262 1249620 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:46:08.116267 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:46:08.116324 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:46:08.148758 1249620 cri.go:89] found id: ""
	I1217 00:46:08.148772 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.148779 1249620 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:46:08.148785 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:46:08.148846 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:46:08.176772 1249620 cri.go:89] found id: ""
	I1217 00:46:08.176785 1249620 logs.go:282] 0 containers: []
	W1217 00:46:08.176792 1249620 logs.go:284] No container was found matching "kindnet"
	I1217 00:46:08.176800 1249620 logs.go:123] Gathering logs for container status ...
	I1217 00:46:08.176811 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:46:08.203752 1249620 logs.go:123] Gathering logs for kubelet ...
	I1217 00:46:08.203768 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:46:08.259944 1249620 logs.go:123] Gathering logs for dmesg ...
	I1217 00:46:08.259962 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:46:08.274608 1249620 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:46:08.274624 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:46:08.345527 1249620 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:46:08.332278    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.332809    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.334525    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.339713    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.341262    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:46:08.332278    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.332809    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.334525    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.339713    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:08.341262    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:46:08.345538 1249620 logs.go:123] Gathering logs for containerd ...
	I1217 00:46:08.345549 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 00:46:08.383462 1249620 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:46:08.383508 1249620 out.go:285] * 
	W1217 00:46:08.383620 1249620 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:46:08.383693 1249620 out.go:285] * 
	W1217 00:46:08.385855 1249620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:46:08.391282 1249620 out.go:203] 
	W1217 00:46:08.394806 1249620 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001108504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:46:08.394843 1249620 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:46:08.394864 1249620 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:46:08.398711 1249620 out.go:203] 
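
	The suggestion at the end of the minikube output names a concrete retry. A minimal sketch of that retry, reusing the profile name, driver, runtime, and Kubernetes version recorded in the StartCluster config at the top of this section (all values taken from this log; not verified to fix the failure):

	    minikube start -p functional-608344 --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd

	The --extra-config=kubelet.cgroup-driver=systemd flag is the workaround minikube itself proposes (see issue #4172 above); since the kubelet here exits on a cgroup v1 validation check rather than a cgroup-driver mismatch, it may not be sufficient on its own.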
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522436142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522515290Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522617913Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522688379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522755071Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522862092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522927668Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522998463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523074936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523156602Z" level=info msg="Connect containerd service"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523516648Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.524216882Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538375053Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538622506Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538550875Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.544796580Z" level=info msg="Start recovering state"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581522148Z" level=info msg="Start event monitor"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581757302Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581827727Z" level=info msg="Start streaming server"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581904036Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581963195Z" level=info msg="runtime interface starting up..."
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.582025695Z" level=info msg="starting plugins..."
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.582100231Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:37:58 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.584443956Z" level=info msg="containerd successfully booted in 0.088658s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:46:09.349341    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:09.349961    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:09.351547    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:09.351955    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:46:09.353476    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
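Every retry above fails the same way, which points at nothing listening on the apiserver port (8441) inside the node rather than a transient network blip. A quick sketch for confirming that from the host, assuming the docker driver and that ss and crictl are present in the kicbase image:

    # Is anything bound to the apiserver port?
    docker exec functional-608344 ss -ltn | grep -w 8441
    # Was a kube-apiserver container ever created? Empty output means kubelet
    # never got far enough to start the static pods.
    docker exec functional-608344 crictl ps -a --name kube-apiserver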
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:46:09 up  6:28,  0 user,  load average: 0.41, 0.60, 1.26
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:46:05 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 00:46:06 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:06 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:06 functional-608344 kubelet[4715]: E1217 00:46:06.669538    4715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 00:46:07 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:07 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:07 functional-608344 kubelet[4721]: E1217 00:46:07.412323    4721 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 00:46:08 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:08 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:08 functional-608344 kubelet[4768]: E1217 00:46:08.187734    4768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 00:46:08 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:08 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:46:08 functional-608344 kubelet[4829]: E1217 00:46:08.947752    4829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
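This restart loop is the root cause behind the apiserver being down: the v1.35.0-beta.0 kubelet refuses to run on cgroup v1 hosts, and this runner's 5.15.0-1084-aws kernel (see the kernel section above) appears to boot with cgroup v1, so kubelet exits on every systemd restart (321 attempts by this point). The standard, non-minikube-specific check for a host's cgroup version:

    # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1.
    stat -fc %T /sys/fs/cgroup/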
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 6 (314.175346ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 00:46:09.779258 1255328 status.go:458] kubeconfig endpoint: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
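Two follow-on symptoms show up in this status call: the profile's endpoint is missing from the kubeconfig, and the cached context points at a stale VM. Had the cluster itself been healthy, the fix would be the command minikube suggests above, e.g. (profile flag added here for this run's profile):

    out/minikube-linux-arm64 update-context -p functional-608344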
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:46:09.797340 1211243 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --alsologtostderr -v=8
E1217 00:46:56.876957 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:47:24.581545 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:50:09.433346 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:51:32.509197 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:51:56.877112 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --alsologtostderr -v=8: exit status 80 (6m5.594814265s)

                                                
                                                
-- stdout --
	* [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:46:09.841325 1255403 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:46:09.841557 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841588 1255403 out.go:374] Setting ErrFile to fd 2...
	I1217 00:46:09.841608 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841909 1255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:46:09.842319 1255403 out.go:368] Setting JSON to false
	I1217 00:46:09.843208 1255403 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23320,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:46:09.843304 1255403 start.go:143] virtualization:  
	I1217 00:46:09.846714 1255403 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:46:09.849718 1255403 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:46:09.849800 1255403 notify.go:221] Checking for updates...
	I1217 00:46:09.855303 1255403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:46:09.858207 1255403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:09.860971 1255403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:46:09.863762 1255403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:46:09.866648 1255403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:46:09.869965 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:09.870075 1255403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:46:09.899794 1255403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:46:09.899910 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:09.954202 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:09.945326941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:09.954303 1255403 docker.go:319] overlay module found
	I1217 00:46:09.957332 1255403 out.go:179] * Using the docker driver based on existing profile
	I1217 00:46:09.960126 1255403 start.go:309] selected driver: docker
	I1217 00:46:09.960147 1255403 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:09.960238 1255403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:46:09.960367 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:10.027336 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:10.013273525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:10.027811 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:10.027879 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:10.027939 1255403 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:10.033595 1255403 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:46:10.036654 1255403 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:46:10.039839 1255403 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:46:10.042883 1255403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:46:10.042915 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:10.042969 1255403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:46:10.042980 1255403 cache.go:65] Caching tarball of preloaded images
	I1217 00:46:10.043067 1255403 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:46:10.043077 1255403 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:46:10.043192 1255403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:46:10.064109 1255403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:46:10.064135 1255403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:46:10.064157 1255403 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:46:10.064192 1255403 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:46:10.064257 1255403 start.go:364] duration metric: took 41.379µs to acquireMachinesLock for "functional-608344"
	I1217 00:46:10.064326 1255403 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:46:10.064336 1255403 fix.go:54] fixHost starting: 
	I1217 00:46:10.064635 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:10.082218 1255403 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:46:10.082251 1255403 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:46:10.085538 1255403 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:46:10.085593 1255403 machine.go:94] provisionDockerMachine start ...
	I1217 00:46:10.085773 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.104030 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.104380 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.104395 1255403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:46:10.233303 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.233328 1255403 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:46:10.233404 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.250839 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.251149 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.251164 1255403 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:46:10.396645 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.396749 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.422445 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.422746 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.422762 1255403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:46:10.553926 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:46:10.553954 1255403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:46:10.554002 1255403 ubuntu.go:190] setting up certificates
	I1217 00:46:10.554025 1255403 provision.go:84] configureAuth start
	I1217 00:46:10.554113 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:10.571790 1255403 provision.go:143] copyHostCerts
	I1217 00:46:10.571842 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571886 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:46:10.571897 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571976 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:46:10.572067 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572088 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:46:10.572098 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572127 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:46:10.572172 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572192 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:46:10.572198 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572222 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:46:10.572274 1255403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:46:10.693030 1255403 provision.go:177] copyRemoteCerts
	I1217 00:46:10.693099 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:46:10.693140 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.710526 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.805595 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:46:10.805709 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:46:10.823672 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:46:10.823734 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:46:10.841740 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:46:10.841805 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:46:10.859736 1255403 provision.go:87] duration metric: took 305.682111ms to configureAuth
	I1217 00:46:10.859764 1255403 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:46:10.859948 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:10.859960 1255403 machine.go:97] duration metric: took 774.357768ms to provisionDockerMachine
	I1217 00:46:10.859968 1255403 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:46:10.859979 1255403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:46:10.860038 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:46:10.860081 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.876877 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.973995 1255403 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:46:10.977418 1255403 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:46:10.977440 1255403 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:46:10.977445 1255403 command_runner.go:130] > VERSION_ID="12"
	I1217 00:46:10.977450 1255403 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:46:10.977468 1255403 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:46:10.977472 1255403 command_runner.go:130] > ID=debian
	I1217 00:46:10.977477 1255403 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:46:10.977482 1255403 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:46:10.977488 1255403 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:46:10.977542 1255403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:46:10.977565 1255403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:46:10.977576 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:46:10.977631 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:46:10.977740 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:46:10.977753 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /etc/ssl/certs/12112432.pem
	I1217 00:46:10.977836 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:46:10.977845 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> /etc/test/nested/copy/1211243/hosts
	I1217 00:46:10.977888 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:46:10.985858 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:11.003616 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:46:11.025062 1255403 start.go:296] duration metric: took 165.078815ms for postStartSetup
	I1217 00:46:11.025171 1255403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:46:11.025235 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.042501 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.135058 1255403 command_runner.go:130] > 18%
	I1217 00:46:11.135791 1255403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:46:11.141537 1255403 command_runner.go:130] > 159G
	I1217 00:46:11.142252 1255403 fix.go:56] duration metric: took 1.077909712s for fixHost
	I1217 00:46:11.142316 1255403 start.go:83] releasing machines lock for "functional-608344", held for 1.07800111s
	I1217 00:46:11.142412 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:11.164178 1255403 ssh_runner.go:195] Run: cat /version.json
	I1217 00:46:11.164239 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.164497 1255403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:46:11.164553 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.196976 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.203865 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.389604 1255403 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:46:11.389719 1255403 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:46:11.389906 1255403 ssh_runner.go:195] Run: systemctl --version
	I1217 00:46:11.396314 1255403 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:46:11.396351 1255403 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:46:11.396781 1255403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:46:11.401747 1255403 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:46:11.401791 1255403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:46:11.401850 1255403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:46:11.410012 1255403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:46:11.410035 1255403 start.go:496] detecting cgroup driver to use...
	I1217 00:46:11.410068 1255403 detect.go:187] detected "cgroupfs" cgroup driver on host os
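The "cgroupfs" value detect.go reports here can be read straight from Docker, and it is what drives the SystemdCgroup = false edit a few lines below:

    docker info --format '{{.CgroupDriver}}'   # "cgroupfs" on this runner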
	I1217 00:46:11.410119 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:46:11.427912 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:46:11.441702 1255403 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:46:11.441797 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:46:11.458922 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:46:11.473296 1255403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:46:11.602661 1255403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:46:11.727834 1255403 docker.go:234] disabling docker service ...
	I1217 00:46:11.727932 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:46:11.743775 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:46:11.756449 1255403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:46:11.884208 1255403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:46:12.041744 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:46:12.055323 1255403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:46:12.069025 1255403 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
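With /etc/crictl.yaml in place, the crictl invocations later in this log need no endpoint flag. The equivalent one-off form, for comparison:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version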
	I1217 00:46:12.070254 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:46:12.080613 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:46:12.090397 1255403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:46:12.090539 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:46:12.100248 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.110370 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:46:12.120135 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.130289 1255403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:46:12.139404 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:46:12.148731 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:46:12.158190 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
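The sed edits above pin the sandbox image, disable systemd cgroups, force the runc v2 runtime, restore the CNI conf_dir, and re-enable unprivileged ports. A quick way to eyeball the resulting values without dumping the whole TOML (section paths differ between containerd 1.x and 2.x configs, so grep rather than assume a path):

    sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml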
	I1217 00:46:12.167677 1255403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:46:12.175393 1255403 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:46:12.175487 1255403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:46:12.183394 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.301782 1255403 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:46:12.439684 1255403 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:46:12.439765 1255403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:46:12.443346 1255403 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 00:46:12.443371 1255403 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:46:12.443378 1255403 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1217 00:46:12.443385 1255403 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:12.443391 1255403 command_runner.go:130] > Access: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443396 1255403 command_runner.go:130] > Modify: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443401 1255403 command_runner.go:130] > Change: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443405 1255403 command_runner.go:130] >  Birth: -
	I1217 00:46:12.443632 1255403 start.go:564] Will wait 60s for crictl version
	I1217 00:46:12.443703 1255403 ssh_runner.go:195] Run: which crictl
	I1217 00:46:12.446726 1255403 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:46:12.447174 1255403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:46:12.472886 1255403 command_runner.go:130] > Version:  0.1.0
	I1217 00:46:12.473228 1255403 command_runner.go:130] > RuntimeName:  containerd
	I1217 00:46:12.473244 1255403 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 00:46:12.473249 1255403 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:46:12.475292 1255403 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:46:12.475358 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.494552 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.496407 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.517873 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.525827 1255403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:46:12.528776 1255403 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:46:12.544531 1255403 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:46:12.548354 1255403 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:46:12.548680 1255403 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:46:12.548798 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:12.548865 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.573132 1255403 command_runner.go:130] > {
	I1217 00:46:12.573158 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.573163 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573172 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.573185 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573191 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.573195 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573199 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573208 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.573215 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573220 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.573226 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573230 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573234 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573237 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573252 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.573259 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573265 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.573268 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573273 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573284 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.573288 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573292 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.573296 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573300 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573306 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573310 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573323 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.573327 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573333 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.573339 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573350 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573361 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.573365 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573371 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.573376 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.573379 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573385 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573389 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573398 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.573404 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573409 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.573412 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573418 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573426 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.573432 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573437 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.573440 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573446 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573449 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573455 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573459 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573465 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573468 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573475 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.573478 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573484 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.573490 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573494 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573504 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.573508 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573512 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.573521 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573529 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573541 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573546 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573551 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573555 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573560 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573567 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.573574 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573580 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.573583 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573590 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573598 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.573605 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573609 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.573613 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573622 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573625 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573629 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573634 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573660 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573664 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573671 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.573681 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573690 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.573694 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573698 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573710 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.573714 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573719 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.573725 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573729 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573733 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573736 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573743 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.573753 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573759 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.573762 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573765 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573773 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.573776 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573784 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.573790 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573794 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573800 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573804 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573816 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573819 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573822 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573830 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.573836 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573842 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.573845 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573851 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573859 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.573864 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573868 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.573875 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573879 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.573884 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573888 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573894 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.573897 1255403 command_runner.go:130] >     }
	I1217 00:46:12.573900 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.573903 1255403 command_runner.go:130] > }
	I1217 00:46:12.574073 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.574086 1255403 containerd.go:534] Images already preloaded, skipping extraction
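The JSON dump above is the raw `crictl` output that containerd.go parses before deciding the preload tarball can be skipped. To reproduce the same check by hand against this profile (a sketch, assuming `jq` is available where you run it):
	# Print the repo tags containerd reports, the same data minikube parses above.
	minikube -p functional-608344 ssh -- sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]' | sort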
	I1217 00:46:12.574147 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.596238 1255403 command_runner.go:130] > {
	I1217 00:46:12.596261 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.596266 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596284 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.596300 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596310 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.596314 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596318 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596329 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.596337 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596342 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.596346 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596353 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596356 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596362 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596372 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.596380 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596386 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.596389 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596393 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596402 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.596408 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596413 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.596417 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596422 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596427 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596432 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596442 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.596446 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596451 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.596457 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596464 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596472 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.596477 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596482 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.596486 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.596492 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596500 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596506 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596513 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.596518 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596523 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.596529 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596533 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596540 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.596547 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596551 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.596554 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596569 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596572 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596577 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596585 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596591 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596594 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596622 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.596626 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596638 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.596641 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596645 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596659 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.596662 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596667 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.596673 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596683 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596690 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596694 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596697 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596707 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596710 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596717 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.596726 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596733 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.596739 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596743 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596751 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.596755 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596761 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.596765 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596771 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596775 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596784 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596788 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596791 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596795 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596808 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.596813 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596818 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.596824 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596828 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596836 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.596839 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596847 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.596853 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596857 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596863 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596866 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596873 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.596879 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596885 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.596889 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596900 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596908 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.596914 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596923 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.596927 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596931 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596936 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596940 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596947 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596950 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596953 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596960 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.596967 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596971 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.596975 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596981 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596989 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.596996 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.597000 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.597004 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.597008 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.597013 1255403 command_runner.go:130] >       },
	I1217 00:46:12.597023 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.597027 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.597030 1255403 command_runner.go:130] >     }
	I1217 00:46:12.597033 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.597039 1255403 command_runner.go:130] > }
	I1217 00:46:12.599655 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.599676 1255403 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:46:12.599685 1255403 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:46:12.599841 1255403 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
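The ExecStart override above is materialized as a systemd drop-in (the 328-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). One way to confirm what actually landed on the node:
	# Show the kubelet unit plus every drop-in, including minikube's 10-kubeadm.conf.
	minikube -p functional-608344 ssh -- sudo systemctl cat kubelet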
	I1217 00:46:12.599942 1255403 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:46:12.623140 1255403 command_runner.go:130] > {
	I1217 00:46:12.623159 1255403 command_runner.go:130] >   "cniconfig": {
	I1217 00:46:12.623164 1255403 command_runner.go:130] >     "Networks": [
	I1217 00:46:12.623168 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623173 1255403 command_runner.go:130] >         "Config": {
	I1217 00:46:12.623178 1255403 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 00:46:12.623184 1255403 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 00:46:12.623188 1255403 command_runner.go:130] >           "Plugins": [
	I1217 00:46:12.623192 1255403 command_runner.go:130] >             {
	I1217 00:46:12.623196 1255403 command_runner.go:130] >               "Network": {
	I1217 00:46:12.623200 1255403 command_runner.go:130] >                 "ipam": {},
	I1217 00:46:12.623205 1255403 command_runner.go:130] >                 "type": "loopback"
	I1217 00:46:12.623209 1255403 command_runner.go:130] >               },
	I1217 00:46:12.623214 1255403 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 00:46:12.623218 1255403 command_runner.go:130] >             }
	I1217 00:46:12.623221 1255403 command_runner.go:130] >           ],
	I1217 00:46:12.623230 1255403 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 00:46:12.623234 1255403 command_runner.go:130] >         },
	I1217 00:46:12.623239 1255403 command_runner.go:130] >         "IFName": "lo"
	I1217 00:46:12.623243 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623246 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623250 1255403 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 00:46:12.623253 1255403 command_runner.go:130] >     "PluginDirs": [
	I1217 00:46:12.623257 1255403 command_runner.go:130] >       "/opt/cni/bin"
	I1217 00:46:12.623260 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623265 1255403 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 00:46:12.623269 1255403 command_runner.go:130] >     "Prefix": "eth"
	I1217 00:46:12.623272 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623284 1255403 command_runner.go:130] >   "config": {
	I1217 00:46:12.623288 1255403 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 00:46:12.623292 1255403 command_runner.go:130] >       "/etc/cdi",
	I1217 00:46:12.623297 1255403 command_runner.go:130] >       "/var/run/cdi"
	I1217 00:46:12.623300 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623303 1255403 command_runner.go:130] >     "cni": {
	I1217 00:46:12.623306 1255403 command_runner.go:130] >       "binDir": "",
	I1217 00:46:12.623310 1255403 command_runner.go:130] >       "binDirs": [
	I1217 00:46:12.623314 1255403 command_runner.go:130] >         "/opt/cni/bin"
	I1217 00:46:12.623317 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.623322 1255403 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 00:46:12.623325 1255403 command_runner.go:130] >       "confTemplate": "",
	I1217 00:46:12.623329 1255403 command_runner.go:130] >       "ipPref": "",
	I1217 00:46:12.623333 1255403 command_runner.go:130] >       "maxConfNum": 1,
	I1217 00:46:12.623337 1255403 command_runner.go:130] >       "setupSerially": false,
	I1217 00:46:12.623341 1255403 command_runner.go:130] >       "useInternalLoopback": false
	I1217 00:46:12.623344 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623352 1255403 command_runner.go:130] >     "containerd": {
	I1217 00:46:12.623356 1255403 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 00:46:12.623361 1255403 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 00:46:12.623366 1255403 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 00:46:12.623369 1255403 command_runner.go:130] >       "runtimes": {
	I1217 00:46:12.623372 1255403 command_runner.go:130] >         "runc": {
	I1217 00:46:12.623377 1255403 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 00:46:12.623381 1255403 command_runner.go:130] >           "PodAnnotations": null,
	I1217 00:46:12.623386 1255403 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 00:46:12.623391 1255403 command_runner.go:130] >           "cgroupWritable": false,
	I1217 00:46:12.623395 1255403 command_runner.go:130] >           "cniConfDir": "",
	I1217 00:46:12.623399 1255403 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 00:46:12.623403 1255403 command_runner.go:130] >           "io_type": "",
	I1217 00:46:12.623406 1255403 command_runner.go:130] >           "options": {
	I1217 00:46:12.623410 1255403 command_runner.go:130] >             "BinaryName": "",
	I1217 00:46:12.623414 1255403 command_runner.go:130] >             "CriuImagePath": "",
	I1217 00:46:12.623421 1255403 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 00:46:12.623426 1255403 command_runner.go:130] >             "IoGid": 0,
	I1217 00:46:12.623429 1255403 command_runner.go:130] >             "IoUid": 0,
	I1217 00:46:12.623434 1255403 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 00:46:12.623437 1255403 command_runner.go:130] >             "Root": "",
	I1217 00:46:12.623441 1255403 command_runner.go:130] >             "ShimCgroup": "",
	I1217 00:46:12.623445 1255403 command_runner.go:130] >             "SystemdCgroup": false
	I1217 00:46:12.623448 1255403 command_runner.go:130] >           },
	I1217 00:46:12.623453 1255403 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 00:46:12.623459 1255403 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 00:46:12.623463 1255403 command_runner.go:130] >           "runtimePath": "",
	I1217 00:46:12.623468 1255403 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 00:46:12.623473 1255403 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 00:46:12.623476 1255403 command_runner.go:130] >           "snapshotter": ""
	I1217 00:46:12.623479 1255403 command_runner.go:130] >         }
	I1217 00:46:12.623483 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623486 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623495 1255403 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 00:46:12.623500 1255403 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 00:46:12.623507 1255403 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 00:46:12.623511 1255403 command_runner.go:130] >     "disableApparmor": false,
	I1217 00:46:12.623517 1255403 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 00:46:12.623522 1255403 command_runner.go:130] >     "disableProcMount": false,
	I1217 00:46:12.623526 1255403 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 00:46:12.623530 1255403 command_runner.go:130] >     "enableCDI": true,
	I1217 00:46:12.623534 1255403 command_runner.go:130] >     "enableSelinux": false,
	I1217 00:46:12.623538 1255403 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 00:46:12.623542 1255403 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 00:46:12.623547 1255403 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 00:46:12.623551 1255403 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 00:46:12.623555 1255403 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 00:46:12.623559 1255403 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 00:46:12.623563 1255403 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 00:46:12.623571 1255403 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623576 1255403 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 00:46:12.623581 1255403 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623585 1255403 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 00:46:12.623590 1255403 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 00:46:12.623593 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623596 1255403 command_runner.go:130] >   "features": {
	I1217 00:46:12.623601 1255403 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 00:46:12.623603 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623607 1255403 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 00:46:12.623617 1255403 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623626 1255403 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623630 1255403 command_runner.go:130] >   "runtimeHandlers": [
	I1217 00:46:12.623632 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623636 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623640 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623645 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623648 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623651 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623654 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623657 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623662 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623666 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623670 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623673 1255403 command_runner.go:130] >       "name": "runc"
	I1217 00:46:12.623676 1255403 command_runner.go:130] >     }
	I1217 00:46:12.623678 1255403 command_runner.go:130] >   ],
	I1217 00:46:12.623682 1255403 command_runner.go:130] >   "status": {
	I1217 00:46:12.623685 1255403 command_runner.go:130] >     "conditions": [
	I1217 00:46:12.623688 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623692 1255403 command_runner.go:130] >         "message": "",
	I1217 00:46:12.623695 1255403 command_runner.go:130] >         "reason": "",
	I1217 00:46:12.623699 1255403 command_runner.go:130] >         "status": true,
	I1217 00:46:12.623708 1255403 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 00:46:12.623711 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623714 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623721 1255403 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 00:46:12.623726 1255403 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 00:46:12.623729 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623733 1255403 command_runner.go:130] >         "type": "NetworkReady"
	I1217 00:46:12.623737 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623739 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623760 1255403 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 00:46:12.623766 1255403 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 00:46:12.623771 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623776 1255403 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 00:46:12.623779 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623782 1255403 command_runner.go:130] >     ]
	I1217 00:46:12.623784 1255403 command_runner.go:130] >   }
	I1217 00:46:12.623787 1255403 command_runner.go:130] > }
	I1217 00:46:12.625494 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:12.625564 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:12.625600 1255403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
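The NetworkReady=false condition in the `crictl info` dump above is expected at this stage: nothing has been written to /etc/cni/net.d yet, and the kindnet recommendation here is what later provides a config. A sketch for watching the condition flip (again assuming `jq`):
	# Poll the runtime conditions; NetworkReady turns true once a CNI config is written.
	minikube -p functional-608344 ssh -- \
	  sudo crictl info | jq '.status.conditions[] | {type, status, reason}'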
	I1217 00:46:12.625679 1255403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:46:12.625821 1255403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
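This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below. Recent kubeadm releases can lint such a file directly; a sketch using the staged binary (the `config validate` subcommand is assumed to be present in this v1.35 build):
	# Ask kubeadm itself whether the generated config is well-formed.
	minikube -p functional-608344 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new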
	I1217 00:46:12.625903 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:46:12.632727 1255403 command_runner.go:130] > kubeadm
	I1217 00:46:12.632744 1255403 command_runner.go:130] > kubectl
	I1217 00:46:12.632749 1255403 command_runner.go:130] > kubelet
	I1217 00:46:12.633544 1255403 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:46:12.633634 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:46:12.641025 1255403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:46:12.653291 1255403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:46:12.665363 1255403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 00:46:12.678080 1255403 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:46:12.681502 1255403 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
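The grep above is an idempotence check: minikube only appends the control-plane alias when it is missing from /etc/hosts. The equivalent shell pattern, shown as a sketch run inside the node:
	# Pin control-plane.minikube.internal only if /etc/hosts does not already have it.
	grep -q 'control-plane.minikube.internal' /etc/hosts \
	  || echo '192.168.49.2	control-plane.minikube.internal' | sudo tee -a /etc/hosts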
	I1217 00:46:12.681599 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.825775 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:13.622571 1255403 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:46:13.622593 1255403 certs.go:195] generating shared ca certs ...
	I1217 00:46:13.622609 1255403 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:13.622746 1255403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:46:13.622792 1255403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:46:13.622803 1255403 certs.go:257] generating profile certs ...
	I1217 00:46:13.622905 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:46:13.622962 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:46:13.623005 1255403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:46:13.623018 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:46:13.623032 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:46:13.623044 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:46:13.623063 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:46:13.623080 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:46:13.623092 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:46:13.623103 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:46:13.623112 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:46:13.623163 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:46:13.623197 1255403 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:46:13.623208 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:46:13.623239 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:46:13.623268 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:46:13.623296 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:46:13.623339 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:13.623376 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem -> /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.623391 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.623403 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.630954 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:46:13.648792 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:46:13.668204 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:46:13.687794 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:46:13.706777 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:46:13.724521 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:46:13.741552 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:46:13.758610 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:46:13.775595 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:46:13.791737 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:46:13.808409 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:46:13.825079 1255403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:46:13.838395 1255403 ssh_runner.go:195] Run: openssl version
	I1217 00:46:13.844664 1255403 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:46:13.845138 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.852395 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:46:13.860295 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864169 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864290 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864356 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.907286 1255403 command_runner.go:130] > b5213941
	I1217 00:46:13.907795 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:46:13.915373 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.922487 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:46:13.929849 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933445 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933486 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933532 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.974007 1255403 command_runner.go:130] > 51391683
	I1217 00:46:13.974086 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:46:13.981522 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.988760 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:46:13.996178 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.999808 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000049 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000110 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.042220 1255403 command_runner.go:130] > 3ec20f2e
	I1217 00:46:14.042784 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
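Each certificate block above follows the same pattern: symlink the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and expect a <hash>.0 link so TLS libraries can find it. A minimal sketch of the rehash step (in this code path minikube only verifies the link exists; it does not create it here):
	# Recreate the subject-hash symlink OpenSSL uses for CA lookup.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"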
	I1217 00:46:14.050625 1255403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054447 1255403 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054541 1255403 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:46:14.054555 1255403 command_runner.go:130] > Device: 259,1	Inode: 1315986     Links: 1
	I1217 00:46:14.054575 1255403 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:14.054585 1255403 command_runner.go:130] > Access: 2025-12-17 00:42:05.487679973 +0000
	I1217 00:46:14.054596 1255403 command_runner.go:130] > Modify: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054601 1255403 command_runner.go:130] > Change: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054606 1255403 command_runner.go:130] >  Birth: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054705 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:46:14.095552 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.096144 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:46:14.136799 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.137343 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:46:14.178363 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.178447 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:46:14.219183 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.219732 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:46:14.260450 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.260974 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:46:14.301394 1255403 command_runner.go:130] > Certificate will not expire
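Each `-checkend 86400` invocation above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds). The same sweep as a loop, run inside the node:
	# Flag any control-plane cert that would expire within 24h.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}"
	done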
	I1217 00:46:14.301907 1255403 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:14.302001 1255403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:46:14.302068 1255403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:46:14.331155 1255403 cri.go:89] found id: ""
	I1217 00:46:14.331262 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:46:14.338208 1255403 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:46:14.338230 1255403 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:46:14.338237 1255403 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:46:14.339135 1255403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:46:14.339150 1255403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:46:14.339201 1255403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:46:14.346631 1255403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:46:14.347092 1255403 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.347204 1255403 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "functional-608344" cluster setting kubeconfig missing "functional-608344" context setting]
	I1217 00:46:14.347476 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
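The repair above rewrites the Jenkins kubeconfig to add the missing cluster and context entries. A rough manual equivalent with kubectl, using the values this run logs:
	# Re-add the cluster and context entries the repair step writes.
	kubectl config set-cluster functional-608344 \
	  --server=https://192.168.49.2:8441 \
	  --certificate-authority=/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt
	kubectl config set-context functional-608344 \
	  --cluster=functional-608344 --user=functional-608344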
	I1217 00:46:14.347923 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.348081 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.348643 1255403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:46:14.348662 1255403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:46:14.348668 1255403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:46:14.348676 1255403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:46:14.348680 1255403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:46:14.348726 1255403 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:46:14.348987 1255403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:46:14.356813 1255403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:46:14.356847 1255403 kubeadm.go:602] duration metric: took 17.690718ms to restartPrimaryControlPlane
	I1217 00:46:14.356857 1255403 kubeadm.go:403] duration metric: took 54.958395ms to StartCluster
	I1217 00:46:14.356874 1255403 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.356946 1255403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.357542 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.357832 1255403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:46:14.358027 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:14.358068 1255403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:46:14.358138 1255403 addons.go:70] Setting storage-provisioner=true in profile "functional-608344"
	I1217 00:46:14.358151 1255403 addons.go:239] Setting addon storage-provisioner=true in "functional-608344"
	I1217 00:46:14.358176 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.358595 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.359037 1255403 addons.go:70] Setting default-storageclass=true in profile "functional-608344"
	I1217 00:46:14.359062 1255403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-608344"
	I1217 00:46:14.359347 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.363164 1255403 out.go:179] * Verifying Kubernetes components...
	I1217 00:46:14.370109 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:14.395757 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.395920 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.396204 1255403 addons.go:239] Setting addon default-storageclass=true in "functional-608344"
	I1217 00:46:14.396233 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.396651 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.400122 1255403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:46:14.403014 1255403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.403037 1255403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:46:14.403100 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.432348 1255403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:14.432368 1255403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:46:14.432430 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.436192 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.459745 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.589788 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:14.612125 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.615872 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.372010 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372004 1255403 node_ready.go:35] waiting up to 6m0s for node "functional-608344" to be "Ready" ...
	W1217 00:46:15.372050 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372084 1255403 retry.go:31] will retry after 317.407291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372123 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.372180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.372127 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.372222 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372230 1255403 retry.go:31] will retry after 355.943922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:15.690082 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:15.728590 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.752296 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.756079 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.756112 1255403 retry.go:31] will retry after 490.658856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794006 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.794063 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794090 1255403 retry.go:31] will retry after 355.367864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
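
The retry.go:31 entries interleaved through this stretch are minikube's generic backoff helper re-running each failed apply after a jittered, growing delay (317ms, 355ms, 490ms, ...). A minimal sketch of that pattern under those assumptions; this is the general technique, not minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry re-runs fn with a jittered, growing delay between attempts,
	// matching the shape of the "will retry after ..." lines above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Exponential base delay plus random jitter.
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// Stand-in for the failing kubectl apply: always refused here.
		err := retry(4, 300*time.Millisecond, func() error {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		})
		fmt.Println("exhausted retries:", err)
	}
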
	I1217 00:46:15.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.872347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.150146 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.223269 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.227406 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.227444 1255403 retry.go:31] will retry after 644.228248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.247645 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.305567 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.309114 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.309147 1255403 retry.go:31] will retry after 583.888251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
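
Each Run line in this loop repeats the same command: the node's bundled kubectl, pointed at the in-cluster kubeconfig, force-applying an addon manifest. A sketch of issuing that exact command, run locally purely for illustration (minikube executes it on the node over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// sudo accepts a VAR=value argument as an environment setting,
		// which is how KUBECONFIG reaches kubectl in the logged command.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		// With the apiserver down, out carries the validation error seen above
		// and err reports exit status 1.
		fmt.Printf("%s\nexit: %v\n", out, err)
	}
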
	I1217 00:46:16.372333 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.372417 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872396 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.872762 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872991 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.894225 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.973490 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.973584 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.973617 1255403 retry.go:31] will retry after 498.903187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995507 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.995580 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995609 1255403 retry.go:31] will retry after 1.192163017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.373109 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.373180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.373508 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:17.373561 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
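
The node_ready.go and round_trippers lines form the readiness poll: a GET of the node object every ~500ms, checked for a Ready condition, retried while the connection is refused, for up to the logged 6m0s. A minimal client-go sketch of that loop, assuming a clientset built as in the earlier sketch; again illustrative, not minikube's node_ready.go.

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> every 500ms until the
	// node reports a Ready=True condition or the deadline passes.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			// Errors (e.g. connection refused) are swallowed and retried,
			// as the interleaved warnings above show.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q never became Ready within %v", name, timeout)
	}
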
	I1217 00:46:17.473767 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:17.533566 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:17.533674 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.533701 1255403 retry.go:31] will retry after 1.256860103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.873264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.873742 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.188247 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:18.252406 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.256687 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.256719 1255403 retry.go:31] will retry after 1.144811642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.373049 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.373118 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.373371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.790823 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:18.844402 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.847927 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.847962 1255403 retry.go:31] will retry after 2.632795947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.873097 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.873200 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.873479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:19.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.373274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.373606 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:19.373688 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:19.401757 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:19.461824 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:19.461875 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.461894 1255403 retry.go:31] will retry after 1.170153632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.872578 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.872951 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.633061 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:20.706366 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:20.706465 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.706522 1255403 retry.go:31] will retry after 4.067917735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.872741 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.872818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.873104 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.372963 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.373230 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.481608 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:21.538429 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:21.542236 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.542268 1255403 retry.go:31] will retry after 2.033886089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.872800 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.873226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:21.873281 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:22.372860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.372933 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.373246 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:22.872860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.872932 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.873275 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.373010 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.373315 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.576715 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:23.645527 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:23.650062 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.650092 1255403 retry.go:31] will retry after 3.729491652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.872758 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.872840 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.873179 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:24.372935 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.373006 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:24.373329 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:24.774870 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:24.835617 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:24.839228 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.839262 1255403 retry.go:31] will retry after 3.072905013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.872619 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.872702 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.873062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.372995 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.373306 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.873005 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.873083 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.873336 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:26.373211 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.373294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.373696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:26.373764 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:26.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.872371 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.372236 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.372311 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.372626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.380005 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:27.448256 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.448292 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.448311 1255403 retry.go:31] will retry after 5.461633916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.872981 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.873109 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.873476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.912882 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:27.976246 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.976284 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.976302 1255403 retry.go:31] will retry after 5.882789745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:28.373014 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.373087 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.373404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:28.873209 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.873722 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:28.873779 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:29.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.372386 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.372743 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:29.872630 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.872744 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.873074 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.373208 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.872993 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.873065 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.873363 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:31.373163 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.373238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:31.373629 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:31.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.372266 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.372712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.872416 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.872562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.910180 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:32.967065 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:32.970705 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:32.970737 1255403 retry.go:31] will retry after 5.90385417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.372205 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.372281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.372548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:33.859276 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:33.872587 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.872665 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.872976 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:33.873029 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:33.917348 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:33.917388 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.917407 1255403 retry.go:31] will retry after 6.782848909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 10 GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 polls at 500ms intervals (00:46:34–00:46:38), every response "connection refused"; node_ready.go:55 retry warnings at 00:46:36 and 00:46:38 ...]
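	The round_trippers request/response pairs summarized above are minikube's node-readiness wait: one GET of the node object every 500ms until the apiserver answers. A stripped-down sketch of that loop, using a bare HTTP client in place of minikube's authenticated client-go transport (the URL and interval come from the log; everything else is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForNode polls the node URL until a request succeeds or the
// deadline passes; while the apiserver is down, every attempt fails
// fast with "connection refused", exactly as the log shows.
func waitForNode(url string, timeout time.Duration) error {
	client := &http.Client{
		// The real client authenticates; skipping verification here only
		// keeps the sketch self-contained against a self-signed apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // apiserver is back; inspect the Ready condition next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not reachable within %s", timeout)
}

func main() {
	err := waitForNode("https://192.168.49.2:8441/api/v1/nodes/functional-608344", 4*time.Minute)
	fmt.Println(err)
}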
	I1217 00:46:38.874746 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:38.934878 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:38.934918 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:38.934938 1255403 retry.go:31] will retry after 11.915569958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 3 more GET polls of the node object (00:46:39–00:46:40), all "connection refused" ...]
	I1217 00:46:40.700947 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:40.758642 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:40.762387 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.762417 1255403 retry.go:31] will retry after 21.268770127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 20 GET polls (00:46:40–00:46:50), all "connection refused"; node_ready.go:55 retry warnings at 00:46:40, 00:46:43, 00:46:45, 00:46:47 and 00:46:49 ...]
	I1217 00:46:50.850773 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... 1 GET poll at 00:46:50.87, "connection refused" ...]
	I1217 00:46:50.907175 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:50.910769 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:50.910800 1255403 retry.go:31] will retry after 16.247326027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 22 GET polls (00:46:51–00:47:01), all "connection refused"; node_ready.go:55 retry warnings at 00:46:52, 00:46:54, 00:46:56, 00:46:58 and 00:47:00 ...]
	I1217 00:47:02.032382 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:02.090439 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:02.094499 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:02.094532 1255403 retry.go:31] will retry after 29.296113507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
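	Both error strings above are the same outage seen from two vantage points: kubectl (run over SSH inside the node) dials localhost:8441 per /var/lib/minikube/kubeconfig, while minikube's own client dials 192.168.49.2:8441. With nothing listening on 8441, both get ECONNREFUSED, and kubectl's client-side validation fails too because it cannot download the OpenAPI schema (hence the --validate=false hint). A hypothetical probe, not part of the test suite, that reproduces the symptom from inside the node:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With the apiserver down, both dials fail with "connect: connection refused".
	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s -> %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s -> reachable\n", addr)
	}
}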
	[... 10 GET polls (00:47:02–00:47:06), all "connection refused"; node_ready.go:55 retry warnings at 00:47:02 and 00:47:05 ...]
	I1217 00:47:07.159163 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:07.225140 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:07.225182 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.225201 1255403 retry.go:31] will retry after 37.614827372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls continue at 500ms intervals (00:47:07–00:47:27), all "connection refused", with node_ready.go:55 retry warnings roughly every 2.5s; excerpt truncated mid-request at 00:47:27 ...]
	 >
	I1217 00:47:27.372787 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:27.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:27.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:27.872655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:27.872704 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:28.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.372383 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:28.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.872629 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.372366 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.372807 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.872715 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.872791 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:29.873212 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:30.372938 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.373018 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.373277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:30.873056 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.873139 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.873488 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.372198 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.372618 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.391812 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:31.449248 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:31.449293 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.449314 1255403 retry.go:31] will retry after 32.643249775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
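
Editor's note: the lines above show minikube's addon applier shelling out to kubectl and scheduling a retry (retry.go reports a ~32s backoff) because the apiserver is refusing connections. A minimal sketch of that apply-and-retry pattern follows; it is a hypothetical illustration, not minikube's actual retry.go, and `applyWithRetry`, the attempt count, and the fixed backoff are assumptions chosen to mirror the log.

    // Hypothetical sketch: retry a `kubectl apply` with a fixed backoff while
    // the apiserver is unreachable (mirrors the behavior logged above).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		if out, e := cmd.CombinedOutput(); e != nil {
    			// The log above shows exactly this failure mode:
    			// "dial tcp [::1]:8441: connect: connection refused".
    			err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
    			time.Sleep(backoff) // the log uses roughly a 32s backoff
    			continue
    		}
    		return nil
    	}
    	return err
    }

    func main() {
    	err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml",
    		3, 30*time.Second)
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }

As long as nothing listens on the apiserver port, every attempt fails the same way, which is why the retries below never succeed.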
	I1217 00:47:31.872710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.872786 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.873055 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:32.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.372938 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.373285 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:32.373340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:32.873121 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.873217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.873546 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.372335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.372605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.873008 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.873404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:34.873456 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:35.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.373619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:35.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.872264 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.372602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:37.372434 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.372516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:37.372976 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:37.872362 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.872747 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.372444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.372518 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.372848 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.872586 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.873000 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:39.372699 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.372776 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.373049 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:39.373096 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:39.872846 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.373177 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.373253 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.373595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.872211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.872651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.372652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.872246 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.872325 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.872669 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:41.872726 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:42.372381 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.372711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:42.872265 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.872338 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.872682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.872482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:43.872795 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:44.372769 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.372846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.373174 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.841021 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:44.872821 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.872904 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.873176 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.901181 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901219 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901313 1255403 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
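
Editor's note: the validation error above is a symptom, not the cause; kubectl cannot download the OpenAPI schema because nothing is listening on port 8441, so even `--validate=false` would only move the failure to the apply itself. The repeated GETs throughout this log are effectively a readiness probe against that port. A hypothetical stdlib-only probe is sketched below; the `/readyz` endpoint is the standard apiserver health endpoint, and the 0.5s interval is taken from the log's cadence. Skipping TLS verification is acceptable only in a throwaway sketch like this.

    // Hypothetical readiness probe: "connection refused" means nothing is
    // listening on 192.168.49.2:8441 yet, so every request fails immediately.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The test cluster uses a self-signed CA; skip verification
    		// in this sketch only, never in production code.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("https://192.168.49.2:8441/readyz")
    		if err != nil {
    			fmt.Println("apiserver not ready:", err)
    			time.Sleep(500 * time.Millisecond) // matches the ~0.5s poll interval above
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("apiserver responded:", resp.Status)
    		return
    	}
    }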
	I1217 00:47:45.372791 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.372857 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:45.872997 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.873071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.873409 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:45.873479 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:46.372167 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.372668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:46.872419 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.872488 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.872764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.372445 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.372517 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.372854 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.872444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.872552 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.872905 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:48.372585 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.372659 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.372975 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:48.373027 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:48.872695 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.872773 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.873117 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.372676 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.372750 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.872988 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.873056 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.873314 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:50.373106 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.373187 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.373532 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:50.373602 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:50.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.872393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.872755 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.372443 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.372822 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.872619 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.872982 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.372733 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.872278 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.872351 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:52.872651 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:53.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.372739 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:53.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.872729 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.372572 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.372934 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.872918 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:54.873327 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:55.373081 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:55.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.372327 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.372740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.872476 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.872974 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:57.372728 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.372818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.373081 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:57.373130 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:57.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.872949 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.873273 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.373071 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.373147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.872181 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.872282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.872778 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.372499 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.372573 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.372928 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.872817 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.872915 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.873279 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:59.873340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:00.373167 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.373598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:00.872322 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.872396 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.372325 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.372746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:02.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:02.372982 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:02.872650 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.872731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.873080 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.372870 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.372941 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.373206 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.872565 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.872662 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.872994 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:04.093431 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:48:04.161956 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165693 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165804 1255403 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:48:04.168987 1255403 out.go:179] * Enabled addons: 
	I1217 00:48:04.172517 1255403 addons.go:530] duration metric: took 1m49.814444692s for enable addons: enabled=[]
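
Editor's note: at this point addon enablement has given up (enabled=[]), but the node_ready.go wait loop keeps polling, as the lines below show. For reference, a hypothetical client-go version of the "Ready" condition check being retried is sketched here; it is not minikube's actual node_ready.go, and requires k8s.io/client-go as a dependency. The kubeconfig path and node name are taken from the log itself.

    // Hypothetical sketch of the node "Ready" condition check that
    // node_ready.go keeps retrying in this log (requires k8s.io/client-go).
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		// With the apiserver down this returns the same
    		// "connect: connection refused" seen throughout the log.
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(nodeReady(cs, "functional-608344"))
    }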
	I1217 00:48:04.372853 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.372931 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:04.373316 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:04.872985 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.873066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.873348 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.373121 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.373201 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.373539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.873175 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.873252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.873567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.372269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.372345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.372632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.872369 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.872456 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.872833 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:06.872898 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:07.372604 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.373010 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:07.872787 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.872855 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.873139 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.372993 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.373331 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.873147 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.873586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:08.873687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:09.373212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.373288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.373540 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:09.872555 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.872628 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.872945 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:11.372815 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 was retried every ~500ms from 00:48:09 through 00:49:11; every attempt returned an empty response (status="", 0-1 ms), and node_ready.go logged the same "connection refused" warning roughly every two seconds, at 00:48:13, :15, :17, :19, :22, :24, :27, :29, :31, :33, :35, :38, :40, :42, :44, :47, :49, :51, :53, :55, :58, 00:49:00, :02, :04, :07, :09 and :11]
	I1217 00:49:11.872170 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:11.872253 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:11.872591 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:12.372279 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:12.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:12.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:12.872348 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:12.872424 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:12.872733 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:13.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:13.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:13.372664 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:13.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:13.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:13.872559 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:13.872604 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:14.372593 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:14.372675 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:14.373023 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:14.872826 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:14.872903 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:14.873205 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:15.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:15.372996 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:15.373251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:15.873029 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:15.873114 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:15.873441 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:15.873499 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:16.372857 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:16.372939 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:16.373291 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:16.873054 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:16.873121 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:16.873381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:17.373207 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:17.373279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:17.373602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:17.872327 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:17.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:17.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:18.372423 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:18.372500 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:18.372828 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:18.372879 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:18.872517 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:18.872599 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:18.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:19.372645 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:19.372727 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:19.373052 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:19.872972 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:19.873040 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:19.873299 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:20.373140 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:20.373222 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:20.373562 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:20.373608 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:20.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:20.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:20.872744 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:21.372226 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:21.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:21.372602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:21.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:21.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:21.872691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:22.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:22.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:22.372710 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:22.872402 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:22.872479 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:22.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:22.872899 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:23.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:23.372379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:23.372750 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:23.872449 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:23.872526 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:23.872900 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:24.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:24.372969 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:24.872889 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:24.872966 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:24.873311 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:24.873367 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:25.373118 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:25.373197 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:25.373542 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:25.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:25.872294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:25.872610 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:26.372286 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:26.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:26.372709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:26.872479 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:26.872558 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:26.872872 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:27.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:27.372321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:27.372614 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:27.372676 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:27.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:27.872335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:27.872682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:28.372259 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:28.372335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:28.372679 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:28.872235 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:28.872321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:28.872639 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.372353 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:29.372446 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.372787 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:29.372846 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:29.872795 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:29.872878 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.873205 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.372902 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:30.372971 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.373231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.873091 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:30.873167 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.873517 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:31.372356 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.372729 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.872406 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:31.872477 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.872758 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.872805 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:32.372270 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.372671 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.872415 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:32.872490 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.872791 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.372459 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:33.372533 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.372866 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:33.872644 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.872953 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.873001 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.372931 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:34.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.373361 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.872791 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:34.872863 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.873115 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.372903 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:35.372977 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.373328 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.873103 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:35.873179 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:35.873583 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:36.372312 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.372627 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:36.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.872692 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.372284 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:37.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.372706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.872237 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:37.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.372274 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:38.372348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.372749 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.872316 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:38.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.872850 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.372223 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:39.372290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.372539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:39.872618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.872954 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.372336 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:40.372418 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.372751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.372807 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:40.872260 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:40.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.872599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.372246 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.372649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:41.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.872775 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:42.372333 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.372608 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:42.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.872731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:42.872782 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.372494 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:43.372595 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.372923 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.872605 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:43.872678 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.873001 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.373029 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:44.373105 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.873220 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:44.873305 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.873597 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:44.873668 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.372245 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.372641 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.872360 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.872431 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.872757 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.372476 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.372555 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.372874 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.872352 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.872442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.872756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.372427 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.372843 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:47.872316 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.872400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.872738 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.372270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.372525 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.872233 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.872305 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.872652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.372364 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.372725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.872595 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.872677 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.872968 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.873012 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.372323 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.872283 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.872695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.372362 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.372439 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.872417 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.872793 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.372402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.372781 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.372837 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.872499 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.872576 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.872861 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.372337 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.872406 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.872497 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.872880 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.372942 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.373033 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.373327 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.373380 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.872873 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.872946 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.873289 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.373144 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.373221 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.373534 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.872319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.872613 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.372250 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.872664 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.872724 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.372361 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.372707 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.872486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.872824 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.372528 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.372963 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.872621 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.873021 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.873080 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.372773 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.372851 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.373182 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.873119 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.873197 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.873526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.872368 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.872754 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:01.372719 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.872316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.872587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.372293 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.372385 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.372341 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.372718 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.372786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.872930 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.373171 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.373565 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.872563 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.872640 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.872400 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.872490 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.872830 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.872896 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:06.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.372620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.872307 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.872724 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.372442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.372532 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.372865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.872303 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.872568 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.372604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:08.372650 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.872368 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.372413 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.372486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.372844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.872786 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.873227 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.372862 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.372935 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.373226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:10.373272 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.872953 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.373164 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.373473 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.873198 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.873284 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.372715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.872568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.872993 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:12.873048 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:13.372927 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.873165 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.873240 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.873498 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.372301 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.372407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.372871 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.872754 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.872837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.873190 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.873248 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:15.372993 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.373063 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.873087 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.373215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.373295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.373634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.872583 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.372302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:17.372792 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:17.872468 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.872545 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.872894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.372588 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.372657 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.372315 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.872564 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.872648 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:19.873002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.872700 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.372611 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.372691 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.372973 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.872655 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.872734 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.873073 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.873119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:22.372896 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.372972 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.373287 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.873079 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.373186 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.373280 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.373600 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.872716 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.372595 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.372669 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.372947 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:24.373002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.872867 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.872947 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.373095 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.373171 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.872191 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.872266 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.872527 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.872403 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.872502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:26.872890 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:27.372542 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.372621 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.372944 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.872693 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.872780 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.873112 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.372992 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.873156 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.873541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.873590 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:29.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.872635 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.872959 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.372576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.872271 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.872677 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.372257 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:31.372730 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.372339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.872735 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.372527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.372826 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:33.372874 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.372580 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.372987 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.872892 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.872961 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.873231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.372626 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.372701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.373063 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:35.373119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.872891 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.872974 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.873309 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.373075 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.373152 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.872187 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.872267 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.872563 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.872562 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:37.872611 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:38.372261 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.372341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.372684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.872478 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.372517 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.372586 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.372901 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.872823 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.872906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:39.873307 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:40.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.373133 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.373501 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.872204 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.872270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.872526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.372331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.872493 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.372459 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.372820 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:42.372870 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:42.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.372358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.372675 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.372764 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.373089 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:44.373137 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.873156 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.873500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.372553 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.372450 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.372523 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.372843 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.872247 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.872328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.872612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.872662 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:47.372273 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.872442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.872571 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.872914 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.372316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.872708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.872770 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:49.372262 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.372671 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.872541 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.372279 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.372679 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.372663 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:51.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:51.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.872354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.872701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.372417 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.372845 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.872527 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.872603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.872927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.372268 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:53.372745 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.872425 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.872508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.872834 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.372720 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.372797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.373062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.872869 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.872951 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.873319 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.373548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:55.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.872221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.872601 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.872374 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.872455 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.872814 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.372544 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.872713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.872786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:58.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.372890 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.872591 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.872679 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.873009 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.372810 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.372884 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.373220 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.872879 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.872969 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.873321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:00.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.373766 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.872691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.372378 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.372784 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.872219 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.872561 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.372674 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:02.372728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.372369 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.372442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.372744 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.372647 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.372731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.373140 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:04.373195 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.872948 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.873032 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.873333 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.373154 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.373234 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.373560 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.872279 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.372617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.872349 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.872425 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.872765 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.872824 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:07.372493 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.372568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.372917 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.872232 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.872304 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.372286 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.372363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.372701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.872282 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.872709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.372217 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:09.372636 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:09.872553 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.872630 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.873023 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.372813 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.372913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.873108 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.873408 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.373293 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:11.373634 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:11.872336 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.872741 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.372228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.372302 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.372577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.872294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.372401 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.372476 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.372816 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.872551 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:13.872945 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:14.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.373321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.872852 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.872927 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.372992 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.373066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.373324 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.873205 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.873281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.873678 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:16.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.372357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.372649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.872599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.372713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.872413 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.372379 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.372482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:18.372852 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:18.872514 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.872616 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.872985 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.372573 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:19.372649 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.372999 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.872895 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:19.872975 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.873244 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.373182 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:20.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.373611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:20.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:20.872380 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:20.872463 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.872815 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.372512 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:21.372596 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.372877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.872254 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:21.872331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.872674 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.372410 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:22.372485 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.372838 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.872260 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:22.872341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:22.872700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:23.372313 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:23.372431 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.372751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.872457 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:23.872534 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.872889 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.372864 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:24.372934 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.373193 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.873012 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:24.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.873516 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:24.873575 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:25.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:25.372410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.372801 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.872339 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:25.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.872741 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.372270 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:26.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.372699 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.872323 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:26.872398 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.372339 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:27.372411 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:27.372716 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:27.872282 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:27.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.872720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:28.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.372837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.872230 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:28.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.872576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:29.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:29.372758 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:29.872531 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:29.872638 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.872972 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.372756 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:30.372841 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.373119 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.872942 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:30.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.873350 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.373103 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:31.373183 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:31.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.872222 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:31.872307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.872623 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:32.372375 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:32.872367 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.872693 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.372238 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:33.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.372597 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:33.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:33.872742 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:34.372680 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:34.372755 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.373097 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.872882 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:34.872958 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.873222 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.373010 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:35.373091 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.373434 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.873113 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:35.873189 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.873528 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.873587 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:36.372222 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:36.372298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.372619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.872253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:36.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.872672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.372647 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.872206 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.872274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.872529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:38.372720 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.872325 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.872409 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.872740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.372402 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.372775 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.872763 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.872846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.873157 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.372823 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.372906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.373231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:40.373285 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.873058 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.873128 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.372149 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.372247 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.372579 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.372329 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.372607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.872312 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.872392 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.872710 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:42.872765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:43.372447 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.372542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.372852 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.872323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.872586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.372513 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.372585 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.372919 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.872748 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.872828 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.873215 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:45.372934 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.373011 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.373274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.873496 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.872225 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.872584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.372332 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.372633 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:47.372687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.872341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.372256 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.872433 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.872737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.372366 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.372695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:49.372750 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.872713 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.872797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.873197 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.372974 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.373045 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.373414 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.872184 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.872263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.872626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.372381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.872387 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.872719 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:51.872772 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:52.372290 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.372289 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.372503 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.372578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.372841 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:54.372883 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:54.872831 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.872903 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.873203 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.372953 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.373030 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.373369 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.873134 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.873209 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.873469 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.372169 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.372249 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.372599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.872338 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.872414 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:56.872838 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:57.372465 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.372538 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.372790 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.372770 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.872250 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.872326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.872637 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.372760 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:59.872577 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.873052 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.377171 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.377261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.377582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.872249 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.872322 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.872642 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.872615 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:01.872654 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.372306 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.872696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.372342 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.372415 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.872358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:03.872747 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.872938 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.873008 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.873277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.373195 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.872300 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.872635 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.372224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.372666 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:06.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.872698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.372405 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.372492 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.372840 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.872529 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.872598 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.872872 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.372280 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:08.372751 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:08.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.872352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.372420 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.372508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.372887 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.872807 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.872889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.373055 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:10.373550 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:10.872220 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.872301 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.872593 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.372352 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.372759 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.872616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.372631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.872391 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:12.872789 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:13.372490 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.372574 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.372922 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.872608 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.872675 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.872937 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.372532 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.372618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.373079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.872885 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.872973 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.873356 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:14.873435 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:15.372134 1255403 node_ready.go:38] duration metric: took 6m0.000083316s for node "functional-608344" to be "Ready" ...
	I1217 00:52:15.375301 1255403 out.go:203] 
	W1217 00:52:15.378227 1255403 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:52:15.378247 1255403 out.go:285] * 
	W1217 00:52:15.380407 1255403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:52:15.382698 1255403 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-608344 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.049272281s for "functional-608344" cluster.
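The stderr capture above shows the failure's shape: the same GET against https://192.168.49.2:8441/api/v1/nodes/functional-608344 is retried roughly every 500ms, every attempt fails with "connection refused", and after exactly 6m0s the surrounding context deadline fires and start exits with GUEST_START. The Go sketch below models that retry-until-deadline loop; waitNodeReady and its checkReady callback are hypothetical stand-ins for minikube's node_ready wait (the real code authenticates to the apiserver and inspects the node's Ready condition), included only to illustrate the pattern.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitNodeReady polls checkReady every 500ms until it reports true or the
// context deadline expires. checkReady stands in for the
// GET /api/v1/nodes/<name> probe seen in the log; transport errors such as
// "connection refused" are treated as retryable, which is why the log keeps
// emitting W node_ready.go:55 "will retry" lines instead of aborting early.
func waitNodeReady(ctx context.Context, checkReady func(context.Context) (bool, error)) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if ready, err := checkReady(ctx); err == nil && ready {
			return nil
		}
		select {
		case <-ctx.Done():
			// After 6m this surfaces as "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-tick.C:
			// fall through to the next attempt
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	// Simulate an apiserver that never comes up, as in the run above.
	err := waitNodeReady(ctx, func(context.Context) (bool, error) {
		return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
	})
	fmt.Println(err)
}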
I1217 00:52:15.846686 1211243 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
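All three proxy variables are empty here, so the refused connections above cannot be blamed on proxy misrouting. When they are set, Go HTTP clients (minikube's included) send requests through the proxy unless NO_PROXY exempts the target, which is why a cluster IP such as 192.168.49.2 normally needs an exemption. A minimal standard-library illustration, with a made-up proxy URL:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	os.Setenv("HTTPS_PROXY", "http://proxy.example:3128") // hypothetical proxy
	os.Setenv("NO_PROXY", "192.168.49.2")                 // exempt the node IP
	req, _ := http.NewRequest("GET", "https://192.168.49.2:8441/healthz", nil)
	proxyURL, _ := http.ProxyFromEnvironment(req)
	fmt.Println(proxyURL) // <nil>: NO_PROXY bypasses the proxy for this host
}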
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
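Read against the failure, the inspect output says the host-side plumbing is intact: the container is Running with the requested 4 GiB memory limit (4294967296) and 2 CPUs (NanoCpus 2000000000), it holds 192.168.49.2 on the functional-608344 network, and the apiserver port 8441/tcp is published to 127.0.0.1:33946. That narrows the "connection refused" errors to the apiserver process inside the container. The published port can be extracted with the same Go-template form minikube itself uses for 22/tcp later in this log:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-608344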
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (334.22736ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
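The template --format={{.Host}} prints only the host (container) state, which is why the output says Running even though the cluster is unhealthy; the signal is in the exit code, which minikube status uses to flag which layers are down (hence the framework's "may be ok"). The status struct exposes further fields that can be queried the same way, e.g.:

out/minikube-linux-arm64 status -p functional-608344 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'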
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/12112432.pem                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image save --daemon kicbase/echo-server:functional-416001 --alsologtostderr                                                           │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /usr/share/ca-certificates/12112432.pem                                                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/test/nested/copy/1211243/hosts                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp functional-416001:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170430960/001/cp-test.txt                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format short --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format yaml --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                               │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh pgrep buildkitd                                                                                                                   │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /tmp/does/not/exist/cp-test.txt                                                                     │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:46:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:46:09.841325 1255403 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:46:09.841557 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841588 1255403 out.go:374] Setting ErrFile to fd 2...
	I1217 00:46:09.841608 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841909 1255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:46:09.842319 1255403 out.go:368] Setting JSON to false
	I1217 00:46:09.843208 1255403 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23320,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:46:09.843304 1255403 start.go:143] virtualization:  
	I1217 00:46:09.846714 1255403 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:46:09.849718 1255403 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:46:09.849800 1255403 notify.go:221] Checking for updates...
	I1217 00:46:09.855303 1255403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:46:09.858207 1255403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:09.860971 1255403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:46:09.863762 1255403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:46:09.866648 1255403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:46:09.869965 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:09.870075 1255403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:46:09.899794 1255403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:46:09.899910 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:09.954202 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:09.945326941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:09.954303 1255403 docker.go:319] overlay module found
	I1217 00:46:09.957332 1255403 out.go:179] * Using the docker driver based on existing profile
	I1217 00:46:09.960126 1255403 start.go:309] selected driver: docker
	I1217 00:46:09.960147 1255403 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:09.960238 1255403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:46:09.960367 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:10.027336 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:10.013273525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:10.027811 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:10.027879 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:10.027939 1255403 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:10.033595 1255403 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:46:10.036654 1255403 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:46:10.039839 1255403 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:46:10.042883 1255403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:46:10.042915 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:10.042969 1255403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:46:10.042980 1255403 cache.go:65] Caching tarball of preloaded images
	I1217 00:46:10.043067 1255403 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:46:10.043077 1255403 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:46:10.043192 1255403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:46:10.064109 1255403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:46:10.064135 1255403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:46:10.064157 1255403 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:46:10.064192 1255403 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:46:10.064257 1255403 start.go:364] duration metric: took 41.379µs to acquireMachinesLock for "functional-608344"
	I1217 00:46:10.064326 1255403 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:46:10.064336 1255403 fix.go:54] fixHost starting: 
	I1217 00:46:10.064635 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:10.082218 1255403 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:46:10.082251 1255403 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:46:10.085538 1255403 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:46:10.085593 1255403 machine.go:94] provisionDockerMachine start ...
	I1217 00:46:10.085773 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.104030 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.104380 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.104395 1255403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:46:10.233303 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.233328 1255403 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:46:10.233404 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.250839 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.251149 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.251164 1255403 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:46:10.396645 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.396749 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.422445 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.422746 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.422762 1255403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:46:10.553926 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
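	The SSH script above is minikube's idempotent /etc/hosts patch: it rewrites (or appends) the 127.0.1.1 entry only when no line already maps the hostname. A minimal spot-check from the host, as an illustrative sketch (command and expected output are assumptions, not captured in this log):
	    docker exec functional-608344 grep '^127.0.1.1' /etc/hosts
	    # expected (assumption): 127.0.1.1 functional-608344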
	I1217 00:46:10.553954 1255403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:46:10.554002 1255403 ubuntu.go:190] setting up certificates
	I1217 00:46:10.554025 1255403 provision.go:84] configureAuth start
	I1217 00:46:10.554113 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:10.571790 1255403 provision.go:143] copyHostCerts
	I1217 00:46:10.571842 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571886 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:46:10.571897 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571976 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:46:10.572067 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572088 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:46:10.572098 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572127 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:46:10.572172 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572192 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:46:10.572198 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572222 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:46:10.572274 1255403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:46:10.693030 1255403 provision.go:177] copyRemoteCerts
	I1217 00:46:10.693099 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:46:10.693140 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.710526 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.805595 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:46:10.805709 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:46:10.823672 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:46:10.823734 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:46:10.841740 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:46:10.841805 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:46:10.859736 1255403 provision.go:87] duration metric: took 305.682111ms to configureAuth
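	configureAuth regenerated the machine server certificate with the SANs listed above (127.0.0.1 192.168.49.2 functional-608344 localhost minikube) and copied ca.pem, server.pem, and server-key.pem into /etc/docker on the node. A hedged way to confirm the SANs landed in the deployed cert (illustrative command, not from this log):
	    docker exec functional-608344 sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'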
	I1217 00:46:10.859764 1255403 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:46:10.859948 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:10.859960 1255403 machine.go:97] duration metric: took 774.357768ms to provisionDockerMachine
	I1217 00:46:10.859968 1255403 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:46:10.859979 1255403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:46:10.860038 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:46:10.860081 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.876877 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.973995 1255403 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:46:10.977418 1255403 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:46:10.977440 1255403 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:46:10.977445 1255403 command_runner.go:130] > VERSION_ID="12"
	I1217 00:46:10.977450 1255403 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:46:10.977468 1255403 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:46:10.977472 1255403 command_runner.go:130] > ID=debian
	I1217 00:46:10.977477 1255403 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:46:10.977482 1255403 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:46:10.977488 1255403 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:46:10.977542 1255403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:46:10.977565 1255403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:46:10.977576 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:46:10.977631 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:46:10.977740 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:46:10.977753 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /etc/ssl/certs/12112432.pem
	I1217 00:46:10.977836 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:46:10.977845 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> /etc/test/nested/copy/1211243/hosts
	I1217 00:46:10.977888 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:46:10.985858 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:11.003616 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:46:11.025062 1255403 start.go:296] duration metric: took 165.078815ms for postStartSetup
	I1217 00:46:11.025171 1255403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:46:11.025235 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.042501 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.135058 1255403 command_runner.go:130] > 18%
	I1217 00:46:11.135791 1255403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:46:11.141537 1255403 command_runner.go:130] > 159G
	I1217 00:46:11.142252 1255403 fix.go:56] duration metric: took 1.077909712s for fixHost
	I1217 00:46:11.142316 1255403 start.go:83] releasing machines lock for "functional-608344", held for 1.07800111s
	I1217 00:46:11.142412 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:11.164178 1255403 ssh_runner.go:195] Run: cat /version.json
	I1217 00:46:11.164239 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.164497 1255403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:46:11.164553 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.196976 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.203865 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.389604 1255403 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:46:11.389719 1255403 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:46:11.389906 1255403 ssh_runner.go:195] Run: systemctl --version
	I1217 00:46:11.396314 1255403 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:46:11.396351 1255403 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:46:11.396781 1255403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:46:11.401747 1255403 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:46:11.401791 1255403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:46:11.401850 1255403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:46:11.410012 1255403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:46:11.410035 1255403 start.go:496] detecting cgroup driver to use...
	I1217 00:46:11.410068 1255403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:46:11.410119 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:46:11.427912 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:46:11.441702 1255403 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:46:11.441797 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:46:11.458922 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:46:11.473296 1255403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:46:11.602661 1255403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:46:11.727834 1255403 docker.go:234] disabling docker service ...
	I1217 00:46:11.727932 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:46:11.743775 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:46:11.756449 1255403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:46:11.884208 1255403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:46:12.041744 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
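	The sequence above stops and masks both cri-docker and docker (socket and service) so containerd remains the only active runtime; the final is-active check returning non-zero confirms docker is down. An equivalent spot-check, as a sketch (assumed commands and output, not in this log):
	    docker exec functional-608344 systemctl is-enabled docker.service cri-docker.service
	    # expected after masking (assumption): "masked" for both units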
	I1217 00:46:12.055323 1255403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:46:12.069025 1255403 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
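	With /etc/crictl.yaml written, crictl is pinned to the containerd socket, so the bare "sudo crictl ..." invocations later in this log need no endpoint flag. The flag form is equivalent (illustrative):
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version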
	I1217 00:46:12.070254 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:46:12.080613 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:46:12.090397 1255403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:46:12.090539 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:46:12.100248 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.110370 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:46:12.120135 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.130289 1255403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:46:12.139404 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:46:12.148731 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:46:12.158190 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
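	The sed edits above rewrite /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, restrict_oom_score_adj and SystemdCgroup are forced to false (matching the detected cgroupfs driver), legacy runc runtime names are rewritten to io.containerd.runc.v2, conf_dir is set to /etc/cni/net.d, and enable_unprivileged_ports is re-inserted as true. A hedged spot-check of the resulting file (expected lines are inferred from the sed patterns, not captured here):
	    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"
	    #   enable_unprivileged_ports = true
	The same settings surface later in the crictl info dump below (enableUnprivilegedPorts: true, SystemdCgroup: false, restrictOOMScoreAdj: false).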
	I1217 00:46:12.167677 1255403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:46:12.175393 1255403 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:46:12.175487 1255403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:46:12.183394 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.301782 1255403 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:46:12.439684 1255403 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:46:12.439765 1255403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:46:12.443346 1255403 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 00:46:12.443371 1255403 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:46:12.443378 1255403 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1217 00:46:12.443385 1255403 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:12.443391 1255403 command_runner.go:130] > Access: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443396 1255403 command_runner.go:130] > Modify: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443401 1255403 command_runner.go:130] > Change: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443405 1255403 command_runner.go:130] >  Birth: -
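	The stat output (type socket, mode srw-rw----, size 0) confirms containerd recreated its unix socket after the restart. A terser equivalent check, as an illustrative sketch:
	    sudo test -S /run/containerd/containerd.sock && echo "containerd socket ready"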
	I1217 00:46:12.443632 1255403 start.go:564] Will wait 60s for crictl version
	I1217 00:46:12.443703 1255403 ssh_runner.go:195] Run: which crictl
	I1217 00:46:12.446726 1255403 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:46:12.447174 1255403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:46:12.472886 1255403 command_runner.go:130] > Version:  0.1.0
	I1217 00:46:12.473228 1255403 command_runner.go:130] > RuntimeName:  containerd
	I1217 00:46:12.473244 1255403 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 00:46:12.473249 1255403 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:46:12.475292 1255403 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:46:12.475358 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.494552 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.496407 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.517873 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.525827 1255403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:46:12.528776 1255403 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:46:12.544531 1255403 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:46:12.548354 1255403 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:46:12.548680 1255403 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:46:12.548798 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:12.548865 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.573132 1255403 command_runner.go:130] > {
	I1217 00:46:12.573158 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.573163 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573172 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.573185 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573191 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.573195 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573199 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573208 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.573215 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573220 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.573226 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573230 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573234 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573237 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573252 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.573259 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573265 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.573268 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573273 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573284 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.573288 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573292 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.573296 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573300 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573306 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573310 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573323 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.573327 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573333 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.573339 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573350 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573361 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.573365 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573371 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.573376 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.573379 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573385 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573389 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573398 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.573404 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573409 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.573412 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573418 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573426 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.573432 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573437 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.573440 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573446 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573449 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573455 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573459 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573465 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573468 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573475 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.573478 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573484 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.573490 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573494 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573504 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.573508 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573512 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.573521 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573529 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573541 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573546 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573551 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573555 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573560 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573567 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.573574 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573580 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.573583 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573590 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573598 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.573605 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573609 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.573613 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573622 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573625 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573629 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573634 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573660 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573664 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573671 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.573681 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573690 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.573694 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573698 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573710 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.573714 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573719 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.573725 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573729 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573733 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573736 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573743 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.573753 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573759 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.573762 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573765 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573773 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.573776 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573784 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.573790 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573794 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573800 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573804 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573816 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573819 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573822 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573830 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.573836 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573842 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.573845 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573851 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573859 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.573864 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573868 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.573875 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573879 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.573884 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573888 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573894 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.573897 1255403 command_runner.go:130] >     }
	I1217 00:46:12.573900 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.573903 1255403 command_runner.go:130] > }
	I1217 00:46:12.574073 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.574086 1255403 containerd.go:534] Images already preloaded, skipping extraction
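	minikube concludes "all images are preloaded" by listing the runtime's images and comparing them against the expected set for v1.35.0-beta.0. To eyeball the same list without the JSON noise, a sketch (assumes jq is available, which this log does not show):
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	    # e.g. registry.k8s.io/kube-apiserver:v1.35.0-beta.0, registry.k8s.io/pause:3.10.1, ...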
	I1217 00:46:12.574147 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.596238 1255403 command_runner.go:130] > {
	I1217 00:46:12.596261 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.596266 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596284 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.596300 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596310 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.596314 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596318 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596329 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.596337 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596342 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.596346 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596353 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596356 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596362 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596372 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.596380 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596386 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.596389 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596393 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596402 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.596408 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596413 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.596417 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596422 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596427 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596432 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596442 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.596446 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596451 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.596457 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596464 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596472 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.596477 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596482 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.596486 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.596492 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596500 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596506 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596513 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.596518 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596523 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.596529 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596533 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596540 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.596547 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596551 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.596554 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596569 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596572 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596577 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596585 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596591 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596594 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596622 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.596626 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596638 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.596641 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596645 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596659 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.596662 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596667 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.596673 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596683 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596690 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596694 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596697 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596707 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596710 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596717 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.596726 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596733 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.596739 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596743 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596751 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.596755 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596761 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.596765 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596771 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596775 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596784 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596788 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596791 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596795 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596808 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.596813 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596818 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.596824 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596828 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596836 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.596839 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596847 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.596853 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596857 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596863 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596866 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596873 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.596879 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596885 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.596889 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596900 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596908 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.596914 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596923 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.596927 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596931 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596936 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596940 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596947 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596950 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596953 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596960 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.596967 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596971 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.596975 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596981 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596989 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.596996 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.597000 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.597004 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.597008 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.597013 1255403 command_runner.go:130] >       },
	I1217 00:46:12.597023 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.597027 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.597030 1255403 command_runner.go:130] >     }
	I1217 00:46:12.597033 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.597039 1255403 command_runner.go:130] > }
	I1217 00:46:12.599655 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.599676 1255403 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:46:12.599685 1255403 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:46:12.599841 1255403 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
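	The generated [Service] drop-in above blanks ExecStart and re-points it at the versioned kubelet binary with the node IP and hostname override. Once installed, the effective unit can be inspected with (illustrative command; the drop-in path itself is not shown in this log):
	    docker exec functional-608344 systemctl cat kubelet
	    # shows kubelet.service plus the ExecStart override above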
	I1217 00:46:12.599942 1255403 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:46:12.623140 1255403 command_runner.go:130] > {
	I1217 00:46:12.623159 1255403 command_runner.go:130] >   "cniconfig": {
	I1217 00:46:12.623164 1255403 command_runner.go:130] >     "Networks": [
	I1217 00:46:12.623168 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623173 1255403 command_runner.go:130] >         "Config": {
	I1217 00:46:12.623178 1255403 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 00:46:12.623184 1255403 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 00:46:12.623188 1255403 command_runner.go:130] >           "Plugins": [
	I1217 00:46:12.623192 1255403 command_runner.go:130] >             {
	I1217 00:46:12.623196 1255403 command_runner.go:130] >               "Network": {
	I1217 00:46:12.623200 1255403 command_runner.go:130] >                 "ipam": {},
	I1217 00:46:12.623205 1255403 command_runner.go:130] >                 "type": "loopback"
	I1217 00:46:12.623209 1255403 command_runner.go:130] >               },
	I1217 00:46:12.623214 1255403 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 00:46:12.623218 1255403 command_runner.go:130] >             }
	I1217 00:46:12.623221 1255403 command_runner.go:130] >           ],
	I1217 00:46:12.623230 1255403 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 00:46:12.623234 1255403 command_runner.go:130] >         },
	I1217 00:46:12.623239 1255403 command_runner.go:130] >         "IFName": "lo"
	I1217 00:46:12.623243 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623246 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623250 1255403 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 00:46:12.623253 1255403 command_runner.go:130] >     "PluginDirs": [
	I1217 00:46:12.623257 1255403 command_runner.go:130] >       "/opt/cni/bin"
	I1217 00:46:12.623260 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623265 1255403 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 00:46:12.623269 1255403 command_runner.go:130] >     "Prefix": "eth"
	I1217 00:46:12.623272 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623284 1255403 command_runner.go:130] >   "config": {
	I1217 00:46:12.623288 1255403 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 00:46:12.623292 1255403 command_runner.go:130] >       "/etc/cdi",
	I1217 00:46:12.623297 1255403 command_runner.go:130] >       "/var/run/cdi"
	I1217 00:46:12.623300 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623303 1255403 command_runner.go:130] >     "cni": {
	I1217 00:46:12.623306 1255403 command_runner.go:130] >       "binDir": "",
	I1217 00:46:12.623310 1255403 command_runner.go:130] >       "binDirs": [
	I1217 00:46:12.623314 1255403 command_runner.go:130] >         "/opt/cni/bin"
	I1217 00:46:12.623317 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.623322 1255403 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 00:46:12.623325 1255403 command_runner.go:130] >       "confTemplate": "",
	I1217 00:46:12.623329 1255403 command_runner.go:130] >       "ipPref": "",
	I1217 00:46:12.623333 1255403 command_runner.go:130] >       "maxConfNum": 1,
	I1217 00:46:12.623337 1255403 command_runner.go:130] >       "setupSerially": false,
	I1217 00:46:12.623341 1255403 command_runner.go:130] >       "useInternalLoopback": false
	I1217 00:46:12.623344 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623352 1255403 command_runner.go:130] >     "containerd": {
	I1217 00:46:12.623356 1255403 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 00:46:12.623361 1255403 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 00:46:12.623366 1255403 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 00:46:12.623369 1255403 command_runner.go:130] >       "runtimes": {
	I1217 00:46:12.623372 1255403 command_runner.go:130] >         "runc": {
	I1217 00:46:12.623377 1255403 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 00:46:12.623381 1255403 command_runner.go:130] >           "PodAnnotations": null,
	I1217 00:46:12.623386 1255403 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 00:46:12.623391 1255403 command_runner.go:130] >           "cgroupWritable": false,
	I1217 00:46:12.623395 1255403 command_runner.go:130] >           "cniConfDir": "",
	I1217 00:46:12.623399 1255403 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 00:46:12.623403 1255403 command_runner.go:130] >           "io_type": "",
	I1217 00:46:12.623406 1255403 command_runner.go:130] >           "options": {
	I1217 00:46:12.623410 1255403 command_runner.go:130] >             "BinaryName": "",
	I1217 00:46:12.623414 1255403 command_runner.go:130] >             "CriuImagePath": "",
	I1217 00:46:12.623421 1255403 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 00:46:12.623426 1255403 command_runner.go:130] >             "IoGid": 0,
	I1217 00:46:12.623429 1255403 command_runner.go:130] >             "IoUid": 0,
	I1217 00:46:12.623434 1255403 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 00:46:12.623437 1255403 command_runner.go:130] >             "Root": "",
	I1217 00:46:12.623441 1255403 command_runner.go:130] >             "ShimCgroup": "",
	I1217 00:46:12.623445 1255403 command_runner.go:130] >             "SystemdCgroup": false
	I1217 00:46:12.623448 1255403 command_runner.go:130] >           },
	I1217 00:46:12.623453 1255403 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 00:46:12.623459 1255403 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 00:46:12.623463 1255403 command_runner.go:130] >           "runtimePath": "",
	I1217 00:46:12.623468 1255403 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 00:46:12.623473 1255403 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 00:46:12.623476 1255403 command_runner.go:130] >           "snapshotter": ""
	I1217 00:46:12.623479 1255403 command_runner.go:130] >         }
	I1217 00:46:12.623483 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623486 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623495 1255403 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 00:46:12.623500 1255403 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 00:46:12.623507 1255403 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 00:46:12.623511 1255403 command_runner.go:130] >     "disableApparmor": false,
	I1217 00:46:12.623517 1255403 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 00:46:12.623522 1255403 command_runner.go:130] >     "disableProcMount": false,
	I1217 00:46:12.623526 1255403 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 00:46:12.623530 1255403 command_runner.go:130] >     "enableCDI": true,
	I1217 00:46:12.623534 1255403 command_runner.go:130] >     "enableSelinux": false,
	I1217 00:46:12.623538 1255403 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 00:46:12.623542 1255403 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 00:46:12.623547 1255403 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 00:46:12.623551 1255403 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 00:46:12.623555 1255403 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 00:46:12.623559 1255403 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 00:46:12.623563 1255403 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 00:46:12.623571 1255403 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623576 1255403 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 00:46:12.623581 1255403 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623585 1255403 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 00:46:12.623590 1255403 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 00:46:12.623593 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623596 1255403 command_runner.go:130] >   "features": {
	I1217 00:46:12.623601 1255403 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 00:46:12.623603 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623607 1255403 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 00:46:12.623617 1255403 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623626 1255403 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623630 1255403 command_runner.go:130] >   "runtimeHandlers": [
	I1217 00:46:12.623632 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623636 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623640 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623645 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623648 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623651 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623654 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623657 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623662 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623666 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623670 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623673 1255403 command_runner.go:130] >       "name": "runc"
	I1217 00:46:12.623676 1255403 command_runner.go:130] >     }
	I1217 00:46:12.623678 1255403 command_runner.go:130] >   ],
	I1217 00:46:12.623682 1255403 command_runner.go:130] >   "status": {
	I1217 00:46:12.623685 1255403 command_runner.go:130] >     "conditions": [
	I1217 00:46:12.623688 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623692 1255403 command_runner.go:130] >         "message": "",
	I1217 00:46:12.623695 1255403 command_runner.go:130] >         "reason": "",
	I1217 00:46:12.623699 1255403 command_runner.go:130] >         "status": true,
	I1217 00:46:12.623708 1255403 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 00:46:12.623711 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623714 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623721 1255403 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 00:46:12.623726 1255403 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 00:46:12.623729 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623733 1255403 command_runner.go:130] >         "type": "NetworkReady"
	I1217 00:46:12.623737 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623739 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623760 1255403 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 00:46:12.623766 1255403 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 00:46:12.623771 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623776 1255403 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 00:46:12.623779 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623782 1255403 command_runner.go:130] >     ]
	I1217 00:46:12.623784 1255403 command_runner.go:130] >   }
	I1217 00:46:12.623787 1255403 command_runner.go:130] > }
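	The JSON above is containerd's CRI status as minikube reads it back (the same document `crictl info` prints). The NetworkReady:false condition is expected at this point: no CNI config has been written to /etc/cni/net.d yet, which is why the very next step creates a CNI manager. A minimal sketch of pulling just the runtime conditions out of that dump, assuming crictl is on PATH and modeling only the fields used here:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"log"
	    	"os/exec"
	    )

	    // criInfo models only the status.conditions portion of the
	    // `crictl info` output shown in the log above.
	    type criInfo struct {
	    	Status struct {
	    		Conditions []struct {
	    			Type    string `json:"type"`
	    			Status  bool   `json:"status"`
	    			Reason  string `json:"reason"`
	    			Message string `json:"message"`
	    		} `json:"conditions"`
	    	} `json:"status"`
	    }

	    func main() {
	    	out, err := exec.Command("crictl", "info").Output()
	    	if err != nil {
	    		log.Fatalf("crictl info: %v", err)
	    	}
	    	var info criInfo
	    	if err := json.Unmarshal(out, &info); err != nil {
	    		log.Fatalf("decode: %v", err)
	    	}
	    	// For the run above this prints RuntimeReady=true and
	    	// NetworkReady=false (NetworkPluginNotReady).
	    	for _, c := range info.Status.Conditions {
	    		fmt.Printf("%-40s %-6v %s\n", c.Type, c.Status, c.Reason)
	    	}
	    }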
	I1217 00:46:12.625494 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:12.625564 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:12.625600 1255403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:46:12.625679 1255403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:46:12.625821 1255403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
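	Note that the generated KubeletConfiguration deliberately disables disk-based resource management, per its own "# disable disk resource management by default" comment: image GC never triggers at a 100% high threshold, and every evictionHard threshold is pinned to "0%". A minimal sketch of reading those fields back out of that document, assuming gopkg.in/yaml.v3:

	    package main

	    import (
	    	"fmt"
	    	"log"

	    	"gopkg.in/yaml.v3"
	    )

	    // kubeletCfg models just the eviction-related fields of the
	    // KubeletConfiguration document generated above.
	    type kubeletCfg struct {
	    	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	    	EvictionHard                map[string]string `yaml:"evictionHard"`
	    	FailSwapOn                  bool              `yaml:"failSwapOn"`
	    }

	    const doc = `
	    imageGCHighThresholdPercent: 100
	    evictionHard:
	      nodefs.available: "0%"
	      nodefs.inodesFree: "0%"
	      imagefs.available: "0%"
	    failSwapOn: false
	    `

	    func main() {
	    	var cfg kubeletCfg
	    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Printf("imageGC high threshold: %d%%\n", cfg.ImageGCHighThresholdPercent)
	    	for k, v := range cfg.EvictionHard {
	    		fmt.Printf("evictionHard[%s] = %s\n", k, v)
	    	}
	    }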
	I1217 00:46:12.625903 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:46:12.632727 1255403 command_runner.go:130] > kubeadm
	I1217 00:46:12.632744 1255403 command_runner.go:130] > kubectl
	I1217 00:46:12.632749 1255403 command_runner.go:130] > kubelet
	I1217 00:46:12.633544 1255403 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:46:12.633634 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:46:12.641025 1255403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:46:12.653291 1255403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:46:12.665363 1255403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 00:46:12.678080 1255403 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:46:12.681502 1255403 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:46:12.681599 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.825775 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:13.622571 1255403 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:46:13.622593 1255403 certs.go:195] generating shared ca certs ...
	I1217 00:46:13.622609 1255403 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:13.622746 1255403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:46:13.622792 1255403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:46:13.622803 1255403 certs.go:257] generating profile certs ...
	I1217 00:46:13.622905 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:46:13.622962 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:46:13.623005 1255403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:46:13.623018 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:46:13.623032 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:46:13.623044 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:46:13.623063 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:46:13.623080 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:46:13.623092 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:46:13.623103 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:46:13.623112 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:46:13.623163 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:46:13.623197 1255403 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:46:13.623208 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:46:13.623239 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:46:13.623268 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:46:13.623296 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:46:13.623339 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:13.623376 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem -> /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.623391 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.623403 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.630954 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:46:13.648792 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:46:13.668204 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:46:13.687794 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:46:13.706777 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:46:13.724521 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:46:13.741552 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:46:13.758610 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:46:13.775595 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:46:13.791737 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:46:13.808409 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:46:13.825079 1255403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:46:13.838395 1255403 ssh_runner.go:195] Run: openssl version
	I1217 00:46:13.844664 1255403 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:46:13.845138 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.852395 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:46:13.860295 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864169 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864290 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864356 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.907286 1255403 command_runner.go:130] > b5213941
	I1217 00:46:13.907795 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:46:13.915373 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.922487 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:46:13.929849 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933445 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933486 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933532 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.974007 1255403 command_runner.go:130] > 51391683
	I1217 00:46:13.974086 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:46:13.981522 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.988760 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:46:13.996178 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.999808 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000049 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000110 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.042220 1255403 command_runner.go:130] > 3ec20f2e
	I1217 00:46:14.042784 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
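	The repeated "ln -fs ... / openssl x509 -hash -noout / test -L /etc/ssl/certs/<hash>.0" sequence above implements OpenSSL's CA lookup convention: a certificate under /etc/ssl/certs is located via a symlink named after its subject hash with a ".0" suffix. A sketch reproducing those steps, assuming openssl on PATH and write access to /etc/ssl/certs (installCA is a hypothetical helper, not minikube's):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    // installCA mirrors the log above: compute the OpenSSL subject
	    // hash of the PEM file, then create the <hash>.0 symlink that
	    // OpenSSL's CA directory lookup expects.
	    func installCA(pem string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	    	if err != nil {
	    		return fmt.Errorf("hash %s: %w", pem, err)
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	    	link := filepath.Join("/etc/ssl/certs", hash+".0")
	    	_ = os.Remove(link) // "-f" semantics: replace an existing link
	    	return os.Symlink(pem, link)
	    }

	    func main() {
	    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    }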
	I1217 00:46:14.050625 1255403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054447 1255403 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054541 1255403 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:46:14.054555 1255403 command_runner.go:130] > Device: 259,1	Inode: 1315986     Links: 1
	I1217 00:46:14.054575 1255403 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:14.054585 1255403 command_runner.go:130] > Access: 2025-12-17 00:42:05.487679973 +0000
	I1217 00:46:14.054596 1255403 command_runner.go:130] > Modify: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054601 1255403 command_runner.go:130] > Change: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054606 1255403 command_runner.go:130] >  Birth: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054705 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:46:14.095552 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.096144 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:46:14.136799 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.137343 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:46:14.178363 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.178447 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:46:14.219183 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.219732 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:46:14.260450 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.260974 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:46:14.301394 1255403 command_runner.go:130] > Certificate will not expire
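	Each "openssl x509 -noout -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means the control-plane certs do not need regeneration. The equivalent check in pure Go with crypto/x509, as a sketch (the path is just one of the certs checked above):

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"log"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the first certificate in the PEM
	    // file expires within d - the crypto/x509 analogue of -checkend.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("%s: no PEM block found", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	if soon {
	    		fmt.Println("Certificate will expire")
	    	} else {
	    		fmt.Println("Certificate will not expire")
	    	}
	    }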
	I1217 00:46:14.301907 1255403 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:14.302001 1255403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:46:14.302068 1255403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:46:14.331155 1255403 cri.go:89] found id: ""
	I1217 00:46:14.331262 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:46:14.338208 1255403 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:46:14.338230 1255403 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:46:14.338237 1255403 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:46:14.339135 1255403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:46:14.339150 1255403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:46:14.339201 1255403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:46:14.346631 1255403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:46:14.347092 1255403 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.347204 1255403 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "functional-608344" cluster setting kubeconfig missing "functional-608344" context setting]
	I1217 00:46:14.347476 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.347923 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.348081 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.348643 1255403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:46:14.348662 1255403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:46:14.348668 1255403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:46:14.348676 1255403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:46:14.348680 1255403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:46:14.348726 1255403 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:46:14.348987 1255403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:46:14.356813 1255403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:46:14.356847 1255403 kubeadm.go:602] duration metric: took 17.690718ms to restartPrimaryControlPlane
	I1217 00:46:14.356857 1255403 kubeadm.go:403] duration metric: took 54.958395ms to StartCluster
	I1217 00:46:14.356874 1255403 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.356946 1255403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.357542 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.357832 1255403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:46:14.358027 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:14.358068 1255403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:46:14.358138 1255403 addons.go:70] Setting storage-provisioner=true in profile "functional-608344"
	I1217 00:46:14.358151 1255403 addons.go:239] Setting addon storage-provisioner=true in "functional-608344"
	I1217 00:46:14.358176 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.358595 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.359037 1255403 addons.go:70] Setting default-storageclass=true in profile "functional-608344"
	I1217 00:46:14.359062 1255403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-608344"
	I1217 00:46:14.359347 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.363164 1255403 out.go:179] * Verifying Kubernetes components...
	I1217 00:46:14.370109 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:14.395757 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.395920 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.396204 1255403 addons.go:239] Setting addon default-storageclass=true in "functional-608344"
	I1217 00:46:14.396233 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.396651 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.400122 1255403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:46:14.403014 1255403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.403037 1255403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:46:14.403100 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.432348 1255403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:14.432368 1255403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:46:14.432430 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.436192 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.459745 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.589788 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:14.612125 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.615872 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.372010 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372004 1255403 node_ready.go:35] waiting up to 6m0s for node "functional-608344" to be "Ready" ...
	W1217 00:46:15.372050 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372084 1255403 retry.go:31] will retry after 317.407291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372123 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.372180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.372127 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.372222 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372230 1255403 retry.go:31] will retry after 355.943922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:15.690082 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:15.728590 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.752296 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.756079 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.756112 1255403 retry.go:31] will retry after 490.658856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794006 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.794063 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794090 1255403 retry.go:31] will retry after 355.367864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.872347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.150146 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.223269 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.227406 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.227444 1255403 retry.go:31] will retry after 644.228248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.247645 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.305567 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.309114 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.309147 1255403 retry.go:31] will retry after 583.888251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.372333 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.372417 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872396 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.872762 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872991 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.894225 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.973490 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.973584 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.973617 1255403 retry.go:31] will retry after 498.903187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995507 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.995580 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995609 1255403 retry.go:31] will retry after 1.192163017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.373109 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.373180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.373508 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:17.373561 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
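	The repeated GETs of /api/v1/nodes/functional-608344 above are minikube's node_ready wait loop: every half second it asks the apiserver for the node's Ready condition, and while the apiserver is still coming back up each request fails with "connection refused" and is retried. An equivalent poll with client-go, as a sketch using the kubeconfig path and names from this run:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22168-1208015/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for {
	    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-608344", metav1.GetOptions{})
	    		if err != nil {
	    			// e.g. "connection refused" while the apiserver restarts
	    			fmt.Println("will retry:", err)
	    		} else {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					fmt.Println("node is Ready")
	    					return
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }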
	I1217 00:46:17.473767 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:17.533566 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:17.533674 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.533701 1255403 retry.go:31] will retry after 1.256860103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.873264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.873742 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.188247 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:18.252406 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.256687 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.256719 1255403 retry.go:31] will retry after 1.144811642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.373049 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.373118 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.373371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.790823 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:18.844402 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.847927 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.847962 1255403 retry.go:31] will retry after 2.632795947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.873097 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.873200 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.873479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:19.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.373274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.373606 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:19.373688 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
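
Every node_ready poll above fails with "connect: connection refused" on 192.168.49.2:8441, meaning the TCP dial itself is rejected: kube-apiserver is not listening at all, as opposed to failing at the TLS or HTTP layer. A hypothetical standalone probe in Go that confirms this distinction (the address is copied from the log; this is a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Apiserver endpoint taken from the failing GETs in the log above.
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the node_ready errors:
		// the port is closed, so kube-apiserver is not serving yet.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port open; any failure would be at the TLS/HTTP layer")
}
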
	I1217 00:46:19.401757 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:19.461824 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:19.461875 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.461894 1255403 retry.go:31] will retry after 1.170153632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.872578 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.872951 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.633061 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:20.706366 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:20.706465 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.706522 1255403 retry.go:31] will retry after 4.067917735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.872741 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.872818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.873104 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.372963 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.373230 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.481608 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:21.538429 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:21.542236 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.542268 1255403 retry.go:31] will retry after 2.033886089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.872800 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.873226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:21.873281 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:22.372860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.372933 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.373246 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:22.872860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.872932 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.873275 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.373010 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.373315 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.576715 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:23.645527 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:23.650062 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.650092 1255403 retry.go:31] will retry after 3.729491652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.872758 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.872840 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.873179 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:24.372935 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.373006 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:24.373329 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:24.774870 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:24.835617 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:24.839228 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.839262 1255403 retry.go:31] will retry after 3.072905013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.872619 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.872702 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.873062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.372995 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.373306 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.873005 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.873083 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.873336 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:26.373211 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.373294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.373696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:26.373764 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:26.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.872371 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.372236 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.372311 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.372626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.380005 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:27.448256 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.448292 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.448311 1255403 retry.go:31] will retry after 5.461633916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.872981 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.873109 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.873476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.912882 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:27.976246 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.976284 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.976302 1255403 retry.go:31] will retry after 5.882789745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
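
Note that these validation failures happen before anything reaches the cluster: with validation on, kubectl first downloads the OpenAPI schema from the server in its kubeconfig (localhost:8441 inside the node here), and that download is what gets connection-refused. The error text itself suggests "--validate=false"; a hedged sketch of that fallback in Go (manifest path copied from the log, kubectl assumed on PATH; skipping validation only removes the schema download, the apply itself still needs a reachable apiserver, so it would not have rescued this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Manifest path copied from the log; this sketch assumes kubectl is on
	// PATH and the kubeconfig points at the same localhost:8441 endpoint.
	manifest := "/etc/kubernetes/addons/storageclass.yaml"

	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err == nil {
		fmt.Printf("applied with validation:\n%s", out)
		return
	}
	fmt.Printf("validated apply failed: %v\n%s", err, out)

	// Fallback suggested by the error text itself. Skipping validation only
	// removes the /openapi/v2 download; the apply still needs a reachable
	// apiserver, so in this log it would fail at the next step anyway.
	out, err = exec.Command("kubectl", "apply", "--validate=false", "--force", "-f", manifest).CombinedOutput()
	fmt.Printf("unvalidated apply: err=%v\n%s", err, out)
}
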
	I1217 00:46:28.373014 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.373087 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.373404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:28.873209 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.873722 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:28.873779 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:29.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.372386 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.372743 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:29.872630 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.872744 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.873074 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.373208 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.872993 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.873065 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.873363 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:31.373163 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.373238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:31.373629 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:31.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.372266 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.372712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.872416 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.872562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.910180 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:32.967065 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:32.970705 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:32.970737 1255403 retry.go:31] will retry after 5.90385417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.372205 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.372281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.372548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:33.859276 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:33.872587 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.872665 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.872976 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:33.873029 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:33.917348 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:33.917388 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.917407 1255403 retry.go:31] will retry after 6.782848909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:34.373058 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.373482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:34.872326 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.872779 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.372469 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.372549 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.372888 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.872424 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:36.372415 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.372487 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.372800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:36.372853 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:36.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.372265 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.372352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.372682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.872370 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.872441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.372314 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.872216 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.872649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:38.872714 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:38.874746 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:38.934878 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:38.934918 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:38.934938 1255403 retry.go:31] will retry after 11.915569958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:39.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.372630 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:39.872679 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.872752 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.873071 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.372962 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.373071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.700947 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:40.758642 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:40.762387 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.762417 1255403 retry.go:31] will retry after 21.268770127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.872611 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.872685 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.872948 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:40.872988 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:41.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.372862 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:41.872999 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.873072 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.873406 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.373262 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.373529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.872690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:43.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:43.372775 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:43.872456 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.872527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.872851 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.372890 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.372962 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.373276 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.872900 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.872976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.873274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:45.373130 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.373198 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.373481 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:45.373531 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:45.872183 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.872255 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.872577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.372267 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.872217 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.872602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.372685 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.872386 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.872467 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:47.872889 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:48.372210 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.372282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:48.872321 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.872397 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.372328 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.372410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.372788 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.872573 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.872652 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.872990 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:49.873044 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:50.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.372858 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.850773 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:50.873153 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.873230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.873507 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.907175 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:50.910769 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:50.910800 1255403 retry.go:31] will retry after 16.247326027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:51.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.372321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.372590 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:51.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.872692 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:52.372397 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:52.372848 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:52.872212 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.872294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.372690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.872298 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.872374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:54.372770 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.372844 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.373109 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:54.373151 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:54.872853 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.873266 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.372618 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.373044 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.872847 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.872929 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.873202 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:56.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.373168 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:56.373526 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:56.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.872653 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.372334 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.372403 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.872318 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.872429 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.872507 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.872821 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:58.872881 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:59.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:59.872494 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.872570 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.872923 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.372776 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.872467 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.872542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:00.873000 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:01.372521 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.372606 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.372957 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:01.872597 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.872682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:02.032382 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:02.090439 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:02.094499 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:02.094532 1255403 retry.go:31] will retry after 29.296113507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
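The validation failure is not about the manifest itself: kubectl apply first downloads the OpenAPI schema from the apiserver (here https://localhost:8441/openapi/v2), so while the apiserver is down every apply fails at the validation step. The stderr names the escape hatch, --validate=false, though skipping validation would not rescue these attempts, since apply still needs a live apiserver to create the objects. A sketch of the same invocation with validation disabled, paths copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command minikube runs above, plus the --validate=false flag
        // the stderr suggests; paths are copied verbatim from the log.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s%v\n", out, err)
    }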
	I1217 00:47:02.372921 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:02.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:02.373278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:02.873066 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:02.873160 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:02.873482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:02.873541 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:03.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:03.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:03.372609 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:03.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:03.872375 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:03.872720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:04.372804 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:04.372891 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:04.373254 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:04.872959 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:04.873031 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:04.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:05.373086 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:05.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:05.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:05.373545 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:05.873124 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:05.873196 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:05.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:06.372203 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:06.372297 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:06.372559 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:06.872285 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:06.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:06.872726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:07.159163 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:07.225140 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:07.225182 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.225201 1255403 retry.go:31] will retry after 37.614827372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.372479 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:07.372553 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:07.372877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:07.872303 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:07.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:07.872631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:07.872689 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:08.372299 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:08.372372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:08.372708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:08.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:08.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:08.872715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:09.372379 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:09.372447 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:09.372796 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:09.872827 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:09.872905 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:09.873268 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:10.373092 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:10.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:10.373486 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:10.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:10.872225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:10.872500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:11.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:11.372299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:11.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:11.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:11.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:11.872706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:12.372374 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:12.372448 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:12.372709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:12.372750 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:12.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:12.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:12.872684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:13.372290 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:13.372393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:13.372701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:13.872165 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:13.872238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:13.872504 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:14.372605 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:14.372683 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:14.372964 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:14.373015 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:14.872900 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:14.872976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:14.873343 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:15.373127 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:15.373252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:15.373582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:15.872301 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:15.872398 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:15.872748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:16.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:16.372381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:16.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:16.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:16.872394 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:16.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:16.872757 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:17.372427 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:17.372500 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:17.372831 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:17.872519 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:17.872615 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:17.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:18.372225 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:18.372298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:18.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:18.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:18.872399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:18.872750 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:18.872814 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:19.372283 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:19.372379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:19.372698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:19.872700 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:19.872787 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:19.873056 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:20.372790 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:20.372868 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:20.373159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:20.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:20.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:20.873262 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:20.873320 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:21.372905 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:21.372976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:21.373258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:21.873023 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:21.873107 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:21.873458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:22.373158 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:22.373237 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:22.373596 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:22.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:22.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:22.872662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:23.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:23.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:23.372703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:23.372759 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:23.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:23.872374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:23.872695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:24.372755 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:24.372830 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:24.373106 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:24.873063 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:24.873147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:24.873454 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:25.373127 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:25.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:25.373530 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:25.373584 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:25.873160 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:25.873234 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:25.873504 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:26.372211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:26.372293 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:26.372638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:26.872245 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:26.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:26.872719 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:27.372392 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:27.372476 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:27.372787 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:27.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:27.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:27.872655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:27.872704 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:28.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.372383 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:28.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.872629 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.372366 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.372807 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.872715 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.872791 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:29.873212 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:30.372938 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.373018 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.373277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:30.873056 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.873139 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.873488 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.372198 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.372618 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.391812 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:31.449248 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:31.449293 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.449314 1255403 retry.go:31] will retry after 32.643249775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.872710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.872786 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.873055 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:32.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.372938 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.373285 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:32.373340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:32.873121 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.873217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.873546 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.372335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.372605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.873008 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.873404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:34.873456 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:35.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.373619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:35.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.872264 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.372602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:37.372434 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.372516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:37.372976 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:37.872362 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.872747 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.372444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.372518 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.372848 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.872586 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.873000 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:39.372699 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.372776 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.373049 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:39.373096 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:39.872846 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.373177 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.373253 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.373595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.872211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.872651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.372652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.872246 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.872325 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.872669 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:41.872726 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:42.372381 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.372711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:42.872265 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.872338 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.872682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.872482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:43.872795 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:44.372769 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.372846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.373174 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.841021 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:44.872821 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.872904 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.873176 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.901181 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901219 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901313 1255403 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
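	(The apply fails before anything reaches the cluster: kubectl validates the manifest client-side against the OpenAPI schema, and with the apiserver down the schema download from localhost:8441 is itself refused, which is why the error message suggests --validate=false. minikube logs "apply failed, will retry"; the Go sketch below shows that shape of retry loop. applyWithRetry, the attempt count, and the fixed 2s backoff are hypothetical, not minikube's addons code.)

	    // applyWithRetry is a hypothetical sketch of the "apply failed,
	    // will retry" behaviour logged above: shell out to kubectl and
	    // back off between attempts while the apiserver is unreachable.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func applyWithRetry(kubectl, manifest string, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            out, e := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
	            if e == nil {
	                return nil
	            }
	            err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
	            time.Sleep(2 * time.Second) // simple fixed backoff for the sketch
	        }
	        return err
	    }

	    func main() {
	        // Paths mirror the log; on a healthy cluster this would succeed.
	        err := applyWithRetry("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
	            "/etc/kubernetes/addons/storageclass.yaml", 3)
	        fmt.Println(err)
	    }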
	I1217 00:47:45.372791 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.372857 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:45.872997 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.873071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.873409 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:45.873479 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:46.372167 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.372668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:46.872419 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.872488 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.872764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.372445 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.372517 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.372854 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.872444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.872552 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.872905 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:48.372585 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.372659 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.372975 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:48.373027 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:48.872695 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.872773 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.873117 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.372676 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.372750 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.872988 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.873056 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.873314 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:50.373106 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.373187 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.373532 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:50.373602 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:50.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.872393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.872755 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.372443 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.372822 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.872619 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.872982 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.372733 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.872278 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.872351 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:52.872651 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:53.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.372739 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:53.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.872729 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.372572 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.372934 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.872918 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:54.873327 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:55.373081 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:55.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.372327 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.372740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.872476 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.872974 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:57.372728 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.372818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.373081 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:57.373130 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:57.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.872949 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.873273 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.373071 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.373147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.872181 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.872282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.872778 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.372499 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.372573 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.372928 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.872817 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.872915 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.873279 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:59.873340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:00.373167 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.373598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:00.872322 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.872396 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.372325 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.372746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:02.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:02.372982 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:02.872650 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.872731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.873080 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.372870 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.372941 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.373206 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.872565 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.872662 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.872994 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:04.093431 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:48:04.161956 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165693 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165804 1255403 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:48:04.168987 1255403 out.go:179] * Enabled addons: 
	I1217 00:48:04.172517 1255403 addons.go:530] duration metric: took 1m49.814444692s for enable addons: enabled=[]
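	(Both addon applies failed, so the enable phase ends with enabled=[] after 1m49.8s. The "duration metric" line is the usual start-time/time.Since logging pattern; a hypothetical sketch, not minikube's code:)

	    // Sketch of the "duration metric" line above: record a start time
	    // and log time.Since(start) once the addon-enable phase finishes.
	    package main

	    import (
	        "log"
	        "time"
	    )

	    func main() {
	        start := time.Now()
	        enabled := []string{}                // both addon applies failed
	        time.Sleep(10 * time.Millisecond)    // stand-in for the enable phase
	        log.Printf("duration metric: took %s for enable addons: enabled=%v",
	            time.Since(start), enabled)
	    }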
	I1217 00:48:04.372853 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.372931 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:04.373316 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:04.872985 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.873066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.873348 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.373121 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.373201 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.373539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.873175 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.873252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.873567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.372269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.372345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.372632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.872369 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.872456 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.872833 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:06.872898 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:07.372604 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.373010 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:07.872787 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.872855 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.873139 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.372993 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.373331 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.873147 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.873586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:08.873687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:09.373212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.373288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.373540 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:09.872555 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.872628 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.872945 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:10.372282 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:48:10.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.872369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.872634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:11.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.372756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:11.372815 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:11.872507 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.873053 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.372797 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.372889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.373152 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.872908 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.872978 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.873325 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:13.373184 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.373269 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.373620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:13.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:13.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.872636 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.873084 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.372598 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.372682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.373038 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.872960 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.873043 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.873401 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.373180 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.872199 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:15.872674 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:16.372365 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.372441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.372748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:16.872398 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.872472 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.372683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.872389 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.872465 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.872803 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:17.872859 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:18.372488 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.372894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:18.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.372327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.872501 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.872865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:19.872907 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:20.872498 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.872906 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.372218 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.872390 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.872727 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:22.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.372529 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.372835 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:22.372884 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:22.872512 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.872593 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.872860 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.372249 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.372651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.872324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.372825 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.872830 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.873278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:24.873332 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:25.373061 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.373140 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.373479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:25.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.872230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.872535 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.872474 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.872823 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:27.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.372554 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:27.372599 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:27.872258 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.372395 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.872546 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:29.372522 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.372981 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:29.373040 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:29.872933 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.873371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.372154 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.372225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.372485 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.872261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.872617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.372737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.872313 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.872382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.872638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:31.872679 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:32.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.372650 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET request/empty-response cycle repeats every ~500ms from 00:48:32 through 00:49:33, and the node_ready.go "will retry" warning recurs roughly every 2s, always with the same cause: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	W1217 00:49:33.873001 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.372931 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:34.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.373361 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.872791 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:34.872863 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.873115 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.372903 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:35.372977 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.373328 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.873103 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:35.873179 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:35.873583 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:36.372312 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.372627 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:36.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.872692 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.372284 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:37.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.372706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.872237 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:37.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.372274 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:38.372348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.372749 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.872316 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:38.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.872850 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.372223 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:39.372290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.372539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:39.872618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.872954 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.372336 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:40.372418 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.372751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.372807 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:40.872260 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:40.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.872599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.372246 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.372649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:41.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.872775 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:42.372333 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.372608 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:42.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.872731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:42.872782 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.372494 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:43.372595 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.372923 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.872605 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:43.872678 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.873001 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.373029 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:44.373105 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.873220 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:44.873305 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.873597 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:44.873668 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.372245 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.372641 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.872360 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.872431 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.872757 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.372476 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.372555 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.372874 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.872352 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.872442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.872756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.372427 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.372843 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:47.872316 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.872400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.872738 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.372270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.372525 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.872233 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.872305 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.872652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.372364 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.372725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.872595 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.872677 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.872968 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.873012 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.372323 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.872283 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.872695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.372362 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.372439 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.872417 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.872793 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.372402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.372781 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.372837 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.872499 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.872576 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.872861 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.372337 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.872406 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.872497 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.872880 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.372942 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.373033 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.373327 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.373380 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.872873 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.872946 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.873289 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.373144 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.373221 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.373534 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.872319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.872613 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.372250 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.872664 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.872724 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.372361 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.372707 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.872486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.872824 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.372528 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.372963 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.872621 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.873021 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.873080 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.372773 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.372851 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.373182 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.873119 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.873197 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.873526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.872368 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.872754 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:01.372719 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.872316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.872587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.372293 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.372385 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.372341 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.372718 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.372786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.872930 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.373171 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.373565 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.872563 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.872640 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.872400 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.872490 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.872830 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.872896 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:06.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.372620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.872307 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.872724 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.372442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.372532 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.372865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.872303 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.872568 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.372604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:08.372650 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.872368 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.372413 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.372486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.372844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.872786 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.873227 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.372862 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.372935 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.373226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:10.373272 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.872953 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.373164 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.373473 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.873198 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.873284 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.372715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.872568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.872993 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:12.873048 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:13.372927 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.873165 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.873240 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.873498 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.372301 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.372407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.372871 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.872754 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.872837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.873190 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.873248 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:15.372993 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.373063 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.873087 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.373215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.373295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.373634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.872583 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.372302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:17.372792 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:17.872468 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.872545 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.872894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.372588 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.372657 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.372315 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.872564 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.872648 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:19.873002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.872700 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.372611 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.372691 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.372973 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.872655 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.872734 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.873073 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.873119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:22.372896 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.372972 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.373287 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.873079 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.373186 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.373280 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.373600 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.872716 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.372595 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.372669 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.372947 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:24.373002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.872867 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.872947 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.373095 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.373171 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.872191 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.872266 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.872527 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.872403 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.872502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:26.872890 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:27.372542 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.372621 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.372944 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.872693 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.872780 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.873112 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.372992 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.873156 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.873541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.873590 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:29.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.872635 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.872959 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.372576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.872271 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.872677 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.372257 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:31.372730 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.372339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.872735 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.372527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.372826 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:33.372874 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.372580 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.372987 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.872892 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.872961 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.873231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.372626 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.372701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.373063 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:35.373119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.872891 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.872974 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.873309 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.373075 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.373152 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.872187 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.872267 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.872563 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.872562 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:37.872611 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:38.372261 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.372341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.372684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.872478 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.372517 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.372586 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.372901 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.872823 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.872906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:39.873307 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:40.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.373133 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.373501 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.872204 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.872270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.872526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.372331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.872493 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.372459 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.372820 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:42.372870 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:42.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.372358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.372675 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.372764 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.373089 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:44.373137 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.873156 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.873500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.372553 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.372450 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.372523 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.372843 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.872247 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.872328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.872612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.872662 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:47.372273 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.872442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.872571 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.872914 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.372316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.872708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.872770 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:49.372262 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.372671 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.872541 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.372279 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.372679 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.372663 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:51.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:51.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.872354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.872701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.372417 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.372845 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.872527 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.872603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.872927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.372268 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:53.372745 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.872425 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.872508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.872834 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.372720 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.372797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.373062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.872869 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.872951 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.873319 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.373548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:55.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.872221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.872601 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.872374 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.872455 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.872814 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.372544 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.872713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.872786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:58.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.372890 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.872591 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.872679 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.873009 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.372810 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.372884 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.373220 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.872879 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.872969 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.873321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:00.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.373766 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.872691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.372378 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.372784 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.872219 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.872561 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.372674 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:02.372728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.372369 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.372442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.372744 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.372647 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.372731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.373140 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:04.373195 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.872948 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.873032 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.873333 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.373154 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.373234 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.373560 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.872279 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.372617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.872349 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.872425 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.872765 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.872824 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:07.372493 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.372568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.372917 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.872232 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.872304 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.372286 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.372363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.372701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.872282 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.872709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.372217 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:09.372636 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:09.872553 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.872630 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.873023 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.372813 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.372913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.873108 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.873408 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.373293 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:11.373634 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:11.872336 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.872741 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.372228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.372302 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.372577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.872294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.372401 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.372476 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.372816 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.872551 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:13.872945 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:14.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.373321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.872852 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.872927 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.372992 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.373066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.373324 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.873205 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.873281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.873678 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:16.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.372357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.372649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.872599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.372713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.872413 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.372379 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.372482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:18.372852 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:18.872514 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.872616 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.872985 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET/Response pairs repeat every ~500ms from 00:51:18 through 00:52:14, each attempt failing; the node_ready.go:55 "will retry" warning for "connection refused" recurs roughly every two seconds throughout ...]
	I1217 00:52:14.872885 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.872973 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.873356 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:14.873435 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:15.372134 1255403 node_ready.go:38] duration metric: took 6m0.000083316s for node "functional-608344" to be "Ready" ...
	I1217 00:52:15.375301 1255403 out.go:203] 
	W1217 00:52:15.378227 1255403 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:52:15.378247 1255403 out.go:285] * 
	W1217 00:52:15.380407 1255403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:52:15.382698 1255403 out.go:203] 
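
	The loop above is minikube's node-readiness wait: the same GET is issued every ~500ms and fails with "connection refused" until the 6m0s deadline expires. As an illustrative sketch (not minikube's actual client code) of that poll-until-deadline pattern in Go, with the URL and interval copied from the log and the authenticated client-go transport omitted:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Sketch of the readiness poll seen in the log: GET the node object
	// every 500ms until a 6-minute deadline. minikube's real client uses
	// authenticated round-trippers and protobuf decoding; both are omitted
	// here, so this is illustrative only.
	func main() {
		const url = "https://192.168.49.2:8441/api/v1/nodes/functional-608344"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url) // real code supplies client certs
			if err != nil {
				fmt.Printf("will retry: %v\n", err) // e.g. connect: connection refused
			} else {
				resp.Body.Close()
				fmt.Printf("status: %s\n", resp.Status)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node \"functional-608344\" to be Ready")
	}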
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376942816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376957372Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376998784Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377016729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377035223Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377046678Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377059240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377075864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377091938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377123479Z" level=info msg="Connect containerd service"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377397697Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.378012285Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397433083Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397499611Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397530536Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397589827Z" level=info msg="Start recovering state"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437023208Z" level=info msg="Start event monitor"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437213758Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437284429Z" level=info msg="Start streaming server"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437351926Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437408837Z" level=info msg="runtime interface starting up..."
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437465551Z" level=info msg="starting plugins..."
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437527813Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:46:12 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.439319128Z" level=info msg="containerd successfully booted in 0.083914s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:52:17.078027    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:17.078549    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:17.080082    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:17.080513    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:17.082167    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:52:17 up  6:34,  0 user,  load average: 0.08, 0.24, 0.87
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:52:13 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:14 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 17 00:52:14 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:14 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:14 functional-608344 kubelet[8387]: E1217 00:52:14.661334    8387 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:14 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:14 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:15 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 17 00:52:15 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:15 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:15 functional-608344 kubelet[8392]: E1217 00:52:15.471329    8392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:15 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:15 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 17 00:52:16 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 kubelet[8409]: E1217 00:52:16.178653    8409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 17 00:52:16 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 kubelet[8465]: E1217 00:52:16.931548    8465 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (375.261958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.20s)
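SoftStart spends its whole window in the same crash loop: the kubelet log above shows restart counters past 800, each attempt dying on the cgroup v1 validation error, so the apiserver never comes back. A small Go sketch, assuming golang.org/x/sys/unix, of how to check which cgroup hierarchy a host exposes (illustration only, not the kubelet's own validation code):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// On a cgroup v2 host /sys/fs/cgroup is a cgroup2 mount; on v1 it is
	// typically tmpfs with per-controller cgroup mounts underneath.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - the kubelet above refuses to start here")
	}
}

On this Ubuntu 20.04 / 5.15 host the check would report cgroup v1, which is consistent with every kubelet restart failing the same way.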

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-608344 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-608344 get po -A: exit status 1 (57.588644ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-608344 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-608344 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-608344 get po -A"
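Both assertions trace back to the same condition: nothing is listening on the apiserver endpoint. A quick way to reproduce the "connection refused" outside of kubectl is a plain TCP dial (sketch only; the host and port are taken from the log above, not discovered):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// With the apiserver down this prints "connect: connection refused",
		// the same error kubectl reported above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}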
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
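The inspect output confirms the container itself is healthy: it is Running, 8441/tcp is published to 127.0.0.1:33946, and the network endpoint at 192.168.49.2 is attached; only the control plane inside it is down. One way to pull that mapped port programmatically, mirroring the Go-template cli_runner invocations in the "Last Start" log below (a sketch that assumes the docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-608344").Output()
	if err != nil {
		panic(err)
	}
	// For the inspect dump above this prints 33946.
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}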
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (323.369544ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
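The helper reads individual status fields through Go templates, which is why Host reports "Running" here while the earlier APIServer query reported "Stopped". A small sketch of the same two probes (assumes minikube is on PATH; as the "may be ok" note says, status exits non-zero when a component is degraded, so the exit code is ignored):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func probe(field string) string {
	// minikube status exits with code 2 when a component is stopped;
	// the formatted field is still written to stdout.
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", "functional-608344").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println("Host:", probe("Host"), "APIServer:", probe("APIServer"))
}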
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/12112432.pem                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image save --daemon kicbase/echo-server:functional-416001 --alsologtostderr                                                           │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /usr/share/ca-certificates/12112432.pem                                                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/test/nested/copy/1211243/hosts                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                      │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp functional-416001:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170430960/001/cp-test.txt                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format short --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt                                                                            │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format yaml --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ cp             │ functional-416001 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                               │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ ssh            │ functional-416001 ssh pgrep buildkitd                                                                                                                   │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ ssh            │ functional-416001 ssh -n functional-416001 sudo cat /tmp/does/not/exist/cp-test.txt                                                                     │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:46:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:46:09.841325 1255403 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:46:09.841557 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841588 1255403 out.go:374] Setting ErrFile to fd 2...
	I1217 00:46:09.841608 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841909 1255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:46:09.842319 1255403 out.go:368] Setting JSON to false
	I1217 00:46:09.843208 1255403 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23320,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:46:09.843304 1255403 start.go:143] virtualization:  
	I1217 00:46:09.846714 1255403 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:46:09.849718 1255403 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:46:09.849800 1255403 notify.go:221] Checking for updates...
	I1217 00:46:09.855303 1255403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:46:09.858207 1255403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:09.860971 1255403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:46:09.863762 1255403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:46:09.866648 1255403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:46:09.869965 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:09.870075 1255403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:46:09.899794 1255403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:46:09.899910 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:09.954202 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:09.945326941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:09.954303 1255403 docker.go:319] overlay module found
	I1217 00:46:09.957332 1255403 out.go:179] * Using the docker driver based on existing profile
	I1217 00:46:09.960126 1255403 start.go:309] selected driver: docker
	I1217 00:46:09.960147 1255403 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:09.960238 1255403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:46:09.960367 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:10.027336 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:10.013273525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:10.027811 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:10.027879 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:10.027939 1255403 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:10.033595 1255403 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:46:10.036654 1255403 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:46:10.039839 1255403 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:46:10.042883 1255403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:46:10.042915 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:10.042969 1255403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:46:10.042980 1255403 cache.go:65] Caching tarball of preloaded images
	I1217 00:46:10.043067 1255403 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:46:10.043077 1255403 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:46:10.043192 1255403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:46:10.064109 1255403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:46:10.064135 1255403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:46:10.064157 1255403 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:46:10.064192 1255403 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:46:10.064257 1255403 start.go:364] duration metric: took 41.379µs to acquireMachinesLock for "functional-608344"
	I1217 00:46:10.064326 1255403 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:46:10.064336 1255403 fix.go:54] fixHost starting: 
	I1217 00:46:10.064635 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:10.082218 1255403 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:46:10.082251 1255403 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:46:10.085538 1255403 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:46:10.085593 1255403 machine.go:94] provisionDockerMachine start ...
	I1217 00:46:10.085773 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.104030 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.104380 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.104395 1255403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:46:10.233303 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.233328 1255403 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:46:10.233404 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.250839 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.251149 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.251164 1255403 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:46:10.396645 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.396749 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.422445 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.422746 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.422762 1255403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:46:10.553926 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:46:10.553954 1255403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:46:10.554002 1255403 ubuntu.go:190] setting up certificates
	I1217 00:46:10.554025 1255403 provision.go:84] configureAuth start
	I1217 00:46:10.554113 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:10.571790 1255403 provision.go:143] copyHostCerts
	I1217 00:46:10.571842 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571886 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:46:10.571897 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571976 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:46:10.572067 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572088 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:46:10.572098 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572127 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:46:10.572172 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572192 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:46:10.572198 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572222 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:46:10.572274 1255403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:46:10.693030 1255403 provision.go:177] copyRemoteCerts
	I1217 00:46:10.693099 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:46:10.693140 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.710526 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.805595 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:46:10.805709 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:46:10.823672 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:46:10.823734 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:46:10.841740 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:46:10.841805 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:46:10.859736 1255403 provision.go:87] duration metric: took 305.682111ms to configureAuth
	I1217 00:46:10.859764 1255403 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:46:10.859948 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:10.859960 1255403 machine.go:97] duration metric: took 774.357768ms to provisionDockerMachine
	I1217 00:46:10.859968 1255403 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:46:10.859979 1255403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:46:10.860038 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:46:10.860081 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.876877 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.973995 1255403 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:46:10.977418 1255403 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:46:10.977440 1255403 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:46:10.977445 1255403 command_runner.go:130] > VERSION_ID="12"
	I1217 00:46:10.977450 1255403 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:46:10.977468 1255403 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:46:10.977472 1255403 command_runner.go:130] > ID=debian
	I1217 00:46:10.977477 1255403 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:46:10.977482 1255403 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:46:10.977488 1255403 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:46:10.977542 1255403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:46:10.977565 1255403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:46:10.977576 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:46:10.977631 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:46:10.977740 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:46:10.977753 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /etc/ssl/certs/12112432.pem
	I1217 00:46:10.977836 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:46:10.977845 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> /etc/test/nested/copy/1211243/hosts
	I1217 00:46:10.977888 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:46:10.985858 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:11.003616 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:46:11.025062 1255403 start.go:296] duration metric: took 165.078815ms for postStartSetup
	I1217 00:46:11.025171 1255403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:46:11.025235 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.042501 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.135058 1255403 command_runner.go:130] > 18%
	I1217 00:46:11.135791 1255403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:46:11.141537 1255403 command_runner.go:130] > 159G
	I1217 00:46:11.142252 1255403 fix.go:56] duration metric: took 1.077909712s for fixHost
	I1217 00:46:11.142316 1255403 start.go:83] releasing machines lock for "functional-608344", held for 1.07800111s
	I1217 00:46:11.142412 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:11.164178 1255403 ssh_runner.go:195] Run: cat /version.json
	I1217 00:46:11.164239 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.164497 1255403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:46:11.164553 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.196976 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.203865 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.389604 1255403 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:46:11.389719 1255403 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
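
The /version.json payload above is a flat four-key JSON object identifying the ISO, the kicbase image, the minikube release, and the build commit. A hypothetical Go shape for it (struct and field names are inferred from the log line, only the JSON keys come from the output):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the /version.json object printed above.
type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := `{"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}`
	var v versionInfo
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		panic(err)
	}
	fmt.Println(v.MinikubeVersion, v.Commit)
}
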
	I1217 00:46:11.389906 1255403 ssh_runner.go:195] Run: systemctl --version
	I1217 00:46:11.396314 1255403 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:46:11.396351 1255403 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:46:11.396781 1255403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:46:11.401747 1255403 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:46:11.401791 1255403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:46:11.401850 1255403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:46:11.410012 1255403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
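
The find invocation above renames any bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled so the runtime will not load it; here none existed, hence "nothing to disable". A rough Go equivalent of that rename pass, assuming the same one-level layout as the -maxdepth 1 predicate:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Match the find predicate: files named *bridge* or *podman*,
		// skipping anything already suffixed .mk_disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Println("disabled", src)
	}
}
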
	I1217 00:46:11.410035 1255403 start.go:496] detecting cgroup driver to use...
	I1217 00:46:11.410068 1255403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:46:11.410119 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:46:11.427912 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:46:11.441702 1255403 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:46:11.441797 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:46:11.458922 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:46:11.473296 1255403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:46:11.602661 1255403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:46:11.727834 1255403 docker.go:234] disabling docker service ...
	I1217 00:46:11.727932 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:46:11.743775 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:46:11.756449 1255403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:46:11.884208 1255403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:46:12.041744 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
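
The stop/disable/mask sequence above takes cri-docker and docker out of the picture so containerd is the only runtime answering on the node; the final is-active probe confirms docker stayed down. A compact Go sketch of the same systemctl sequence (same verbs and units as the log; the error handling is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to systemctl via sudo and reports failures without aborting,
// since a unit may simply not exist on a given image.
func run(args ...string) {
	cmd := exec.Command("sudo", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("%v: %v (%s)\n", args, err, out)
	}
}

func main() {
	for _, svc := range []string{"cri-docker", "docker"} {
		run("systemctl", "stop", "-f", svc+".socket")
		run("systemctl", "stop", "-f", svc+".service")
		run("systemctl", "disable", svc+".socket")
		run("systemctl", "mask", svc+".service")
	}
}
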
	I1217 00:46:12.055323 1255403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:46:12.069025 1255403 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:46:12.070254 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:46:12.080613 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:46:12.090397 1255403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:46:12.090539 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:46:12.100248 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.110370 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:46:12.120135 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.130289 1255403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:46:12.139404 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:46:12.148731 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:46:12.158190 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
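
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected earlier, normalize the runtime type to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true under the CRI plugin table. A Go sketch of just the SystemdCgroup rewrite, using the same regular expression as the corresponding sed line:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitution as:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
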
	I1217 00:46:12.167677 1255403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:46:12.175393 1255403 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:46:12.175487 1255403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:46:12.183394 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.301782 1255403 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:46:12.439684 1255403 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:46:12.439765 1255403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:46:12.443346 1255403 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 00:46:12.443371 1255403 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:46:12.443378 1255403 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1217 00:46:12.443385 1255403 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:12.443391 1255403 command_runner.go:130] > Access: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443396 1255403 command_runner.go:130] > Modify: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443401 1255403 command_runner.go:130] > Change: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443405 1255403 command_runner.go:130] >  Birth: -
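
start.go allows up to 60s for /run/containerd/containerd.sock to come back after the restart, and the stat output above shows the socket already live. A minimal polling sketch of that kind of bounded wait (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}
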
	I1217 00:46:12.443632 1255403 start.go:564] Will wait 60s for crictl version
	I1217 00:46:12.443703 1255403 ssh_runner.go:195] Run: which crictl
	I1217 00:46:12.446726 1255403 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:46:12.447174 1255403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:46:12.472886 1255403 command_runner.go:130] > Version:  0.1.0
	I1217 00:46:12.473228 1255403 command_runner.go:130] > RuntimeName:  containerd
	I1217 00:46:12.473244 1255403 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 00:46:12.473249 1255403 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:46:12.475292 1255403 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:46:12.475358 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.494552 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.496407 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.517873 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.525827 1255403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:46:12.528776 1255403 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:46:12.544531 1255403 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:46:12.548354 1255403 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:46:12.548680 1255403 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:46:12.548798 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:12.548865 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.573132 1255403 command_runner.go:130] > {
	I1217 00:46:12.573158 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.573163 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573172 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.573185 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573191 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.573195 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573199 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573208 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.573215 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573220 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.573226 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573230 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573234 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573237 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573252 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.573259 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573265 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.573268 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573273 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573284 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.573288 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573292 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.573296 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573300 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573306 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573310 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573323 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.573327 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573333 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.573339 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573350 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573361 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.573365 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573371 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.573376 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.573379 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573385 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573389 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573398 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.573404 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573409 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.573412 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573418 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573426 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.573432 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573437 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.573440 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573446 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573449 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573455 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573459 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573465 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573468 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573475 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.573478 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573484 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.573490 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573494 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573504 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.573508 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573512 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.573521 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573529 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573541 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573546 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573551 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573555 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573560 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573567 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.573574 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573580 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.573583 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573590 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573598 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.573605 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573609 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.573613 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573622 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573625 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573629 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573634 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573660 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573664 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573671 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.573681 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573690 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.573694 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573698 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573710 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.573714 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573719 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.573725 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573729 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573733 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573736 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573743 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.573753 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573759 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.573762 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573765 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573773 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.573776 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573784 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.573790 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573794 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573800 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573804 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573816 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573819 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573822 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573830 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.573836 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573842 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.573845 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573851 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573859 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.573864 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573868 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.573875 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573879 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.573884 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573888 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573894 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.573897 1255403 command_runner.go:130] >     }
	I1217 00:46:12.573900 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.573903 1255403 command_runner.go:130] > }
	I1217 00:46:12.574073 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.574086 1255403 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:46:12.574147 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.599655 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.599676 1255403 cache_images.go:86] Images are preloaded, skipping loading
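
The crictl images JSON above is a single images array, and since every required image is already present both extraction and loading are skipped. A hypothetical Go shape for the fields visible in that output (names inferred from the JSON, not from the CRI API definitions):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields visible in the crictl output above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}
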
	I1217 00:46:12.599685 1255403 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:46:12.599841 1255403 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:46:12.599942 1255403 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:46:12.623140 1255403 command_runner.go:130] > {
	I1217 00:46:12.623159 1255403 command_runner.go:130] >   "cniconfig": {
	I1217 00:46:12.623164 1255403 command_runner.go:130] >     "Networks": [
	I1217 00:46:12.623168 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623173 1255403 command_runner.go:130] >         "Config": {
	I1217 00:46:12.623178 1255403 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 00:46:12.623184 1255403 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 00:46:12.623188 1255403 command_runner.go:130] >           "Plugins": [
	I1217 00:46:12.623192 1255403 command_runner.go:130] >             {
	I1217 00:46:12.623196 1255403 command_runner.go:130] >               "Network": {
	I1217 00:46:12.623200 1255403 command_runner.go:130] >                 "ipam": {},
	I1217 00:46:12.623205 1255403 command_runner.go:130] >                 "type": "loopback"
	I1217 00:46:12.623209 1255403 command_runner.go:130] >               },
	I1217 00:46:12.623214 1255403 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 00:46:12.623218 1255403 command_runner.go:130] >             }
	I1217 00:46:12.623221 1255403 command_runner.go:130] >           ],
	I1217 00:46:12.623230 1255403 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 00:46:12.623234 1255403 command_runner.go:130] >         },
	I1217 00:46:12.623239 1255403 command_runner.go:130] >         "IFName": "lo"
	I1217 00:46:12.623243 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623246 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623250 1255403 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 00:46:12.623253 1255403 command_runner.go:130] >     "PluginDirs": [
	I1217 00:46:12.623257 1255403 command_runner.go:130] >       "/opt/cni/bin"
	I1217 00:46:12.623260 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623265 1255403 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 00:46:12.623269 1255403 command_runner.go:130] >     "Prefix": "eth"
	I1217 00:46:12.623272 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623284 1255403 command_runner.go:130] >   "config": {
	I1217 00:46:12.623288 1255403 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 00:46:12.623292 1255403 command_runner.go:130] >       "/etc/cdi",
	I1217 00:46:12.623297 1255403 command_runner.go:130] >       "/var/run/cdi"
	I1217 00:46:12.623300 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623303 1255403 command_runner.go:130] >     "cni": {
	I1217 00:46:12.623306 1255403 command_runner.go:130] >       "binDir": "",
	I1217 00:46:12.623310 1255403 command_runner.go:130] >       "binDirs": [
	I1217 00:46:12.623314 1255403 command_runner.go:130] >         "/opt/cni/bin"
	I1217 00:46:12.623317 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.623322 1255403 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 00:46:12.623325 1255403 command_runner.go:130] >       "confTemplate": "",
	I1217 00:46:12.623329 1255403 command_runner.go:130] >       "ipPref": "",
	I1217 00:46:12.623333 1255403 command_runner.go:130] >       "maxConfNum": 1,
	I1217 00:46:12.623337 1255403 command_runner.go:130] >       "setupSerially": false,
	I1217 00:46:12.623341 1255403 command_runner.go:130] >       "useInternalLoopback": false
	I1217 00:46:12.623344 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623352 1255403 command_runner.go:130] >     "containerd": {
	I1217 00:46:12.623356 1255403 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 00:46:12.623361 1255403 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 00:46:12.623366 1255403 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 00:46:12.623369 1255403 command_runner.go:130] >       "runtimes": {
	I1217 00:46:12.623372 1255403 command_runner.go:130] >         "runc": {
	I1217 00:46:12.623377 1255403 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 00:46:12.623381 1255403 command_runner.go:130] >           "PodAnnotations": null,
	I1217 00:46:12.623386 1255403 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 00:46:12.623391 1255403 command_runner.go:130] >           "cgroupWritable": false,
	I1217 00:46:12.623395 1255403 command_runner.go:130] >           "cniConfDir": "",
	I1217 00:46:12.623399 1255403 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 00:46:12.623403 1255403 command_runner.go:130] >           "io_type": "",
	I1217 00:46:12.623406 1255403 command_runner.go:130] >           "options": {
	I1217 00:46:12.623410 1255403 command_runner.go:130] >             "BinaryName": "",
	I1217 00:46:12.623414 1255403 command_runner.go:130] >             "CriuImagePath": "",
	I1217 00:46:12.623421 1255403 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 00:46:12.623426 1255403 command_runner.go:130] >             "IoGid": 0,
	I1217 00:46:12.623429 1255403 command_runner.go:130] >             "IoUid": 0,
	I1217 00:46:12.623434 1255403 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 00:46:12.623437 1255403 command_runner.go:130] >             "Root": "",
	I1217 00:46:12.623441 1255403 command_runner.go:130] >             "ShimCgroup": "",
	I1217 00:46:12.623445 1255403 command_runner.go:130] >             "SystemdCgroup": false
	I1217 00:46:12.623448 1255403 command_runner.go:130] >           },
	I1217 00:46:12.623453 1255403 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 00:46:12.623459 1255403 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 00:46:12.623463 1255403 command_runner.go:130] >           "runtimePath": "",
	I1217 00:46:12.623468 1255403 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 00:46:12.623473 1255403 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 00:46:12.623476 1255403 command_runner.go:130] >           "snapshotter": ""
	I1217 00:46:12.623479 1255403 command_runner.go:130] >         }
	I1217 00:46:12.623483 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623486 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623495 1255403 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 00:46:12.623500 1255403 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 00:46:12.623507 1255403 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 00:46:12.623511 1255403 command_runner.go:130] >     "disableApparmor": false,
	I1217 00:46:12.623517 1255403 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 00:46:12.623522 1255403 command_runner.go:130] >     "disableProcMount": false,
	I1217 00:46:12.623526 1255403 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 00:46:12.623530 1255403 command_runner.go:130] >     "enableCDI": true,
	I1217 00:46:12.623534 1255403 command_runner.go:130] >     "enableSelinux": false,
	I1217 00:46:12.623538 1255403 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 00:46:12.623542 1255403 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 00:46:12.623547 1255403 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 00:46:12.623551 1255403 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 00:46:12.623555 1255403 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 00:46:12.623559 1255403 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 00:46:12.623563 1255403 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 00:46:12.623571 1255403 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623576 1255403 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 00:46:12.623581 1255403 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623585 1255403 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 00:46:12.623590 1255403 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 00:46:12.623593 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623596 1255403 command_runner.go:130] >   "features": {
	I1217 00:46:12.623601 1255403 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 00:46:12.623603 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623607 1255403 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 00:46:12.623617 1255403 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623626 1255403 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623630 1255403 command_runner.go:130] >   "runtimeHandlers": [
	I1217 00:46:12.623632 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623636 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623640 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623645 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623648 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623651 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623654 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623657 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623662 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623666 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623670 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623673 1255403 command_runner.go:130] >       "name": "runc"
	I1217 00:46:12.623676 1255403 command_runner.go:130] >     }
	I1217 00:46:12.623678 1255403 command_runner.go:130] >   ],
	I1217 00:46:12.623682 1255403 command_runner.go:130] >   "status": {
	I1217 00:46:12.623685 1255403 command_runner.go:130] >     "conditions": [
	I1217 00:46:12.623688 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623692 1255403 command_runner.go:130] >         "message": "",
	I1217 00:46:12.623695 1255403 command_runner.go:130] >         "reason": "",
	I1217 00:46:12.623699 1255403 command_runner.go:130] >         "status": true,
	I1217 00:46:12.623708 1255403 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 00:46:12.623711 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623714 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623721 1255403 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 00:46:12.623726 1255403 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 00:46:12.623729 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623733 1255403 command_runner.go:130] >         "type": "NetworkReady"
	I1217 00:46:12.623737 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623739 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623760 1255403 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 00:46:12.623766 1255403 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 00:46:12.623771 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623776 1255403 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 00:46:12.623779 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623782 1255403 command_runner.go:130] >     ]
	I1217 00:46:12.623784 1255403 command_runner.go:130] >   }
	I1217 00:46:12.623787 1255403 command_runner.go:130] > }
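
In the crictl info dump above, RuntimeReady is true but NetworkReady is false with reason NetworkPluginNotReady, which is expected at this stage: no CNI config is installed yet, and the next lines choose kindnet for the docker-driver-plus-containerd combination. A small sketch of pulling just those conditions out of the JSON (field names taken from the output above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runtimeStatus mirrors only the status.conditions portion of `crictl info`.
type runtimeStatus struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var rs runtimeStatus
	if err := json.Unmarshal(out, &rs); err != nil {
		panic(err)
	}
	for _, c := range rs.Status.Conditions {
		fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Reason)
	}
}
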
	I1217 00:46:12.625494 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:12.625564 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:12.625600 1255403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:46:12.625679 1255403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:46:12.625821 1255403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:46:12.625903 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:46:12.632727 1255403 command_runner.go:130] > kubeadm
	I1217 00:46:12.632744 1255403 command_runner.go:130] > kubectl
	I1217 00:46:12.632749 1255403 command_runner.go:130] > kubelet
	I1217 00:46:12.633544 1255403 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:46:12.633634 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:46:12.641025 1255403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:46:12.653291 1255403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:46:12.665363 1255403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
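
The rendered kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2237-byte scp just above). Presumably a file of this shape is what later feeds kubeadm init; a hedged sketch of that invocation using only paths visible in this log (the promotion from .yaml.new to .yaml and the exact flags are assumptions, not shown here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: the staged config is promoted from kubeadm.yaml.new to
	// kubeadm.yaml before init runs; only the paths appear in this log.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
		"init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
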
	I1217 00:46:12.678080 1255403 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:46:12.681502 1255403 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:46:12.681599 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.825775 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:13.622571 1255403 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:46:13.622593 1255403 certs.go:195] generating shared ca certs ...
	I1217 00:46:13.622609 1255403 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:13.622746 1255403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:46:13.622792 1255403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:46:13.622803 1255403 certs.go:257] generating profile certs ...
	I1217 00:46:13.622905 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:46:13.622962 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:46:13.623005 1255403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:46:13.623018 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:46:13.623032 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:46:13.623044 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:46:13.623063 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:46:13.623080 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:46:13.623092 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:46:13.623103 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:46:13.623112 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:46:13.623163 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:46:13.623197 1255403 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:46:13.623208 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:46:13.623239 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:46:13.623268 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:46:13.623296 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:46:13.623339 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:13.623376 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem -> /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.623391 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.623403 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.630954 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:46:13.648792 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:46:13.668204 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:46:13.687794 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:46:13.706777 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:46:13.724521 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:46:13.741552 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:46:13.758610 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:46:13.775595 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:46:13.791737 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:46:13.808409 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:46:13.825079 1255403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:46:13.838395 1255403 ssh_runner.go:195] Run: openssl version
	I1217 00:46:13.844664 1255403 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:46:13.845138 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.852395 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:46:13.860295 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864169 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864290 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864356 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.907286 1255403 command_runner.go:130] > b5213941
	I1217 00:46:13.907795 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:46:13.915373 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.922487 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:46:13.929849 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933445 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933486 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933532 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.974007 1255403 command_runner.go:130] > 51391683
	I1217 00:46:13.974086 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:46:13.981522 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.988760 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:46:13.996178 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.999808 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000049 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000110 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.042220 1255403 command_runner.go:130] > 3ec20f2e
	I1217 00:46:14.042784 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
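
[Annotation] The three hash/link cycles above (b5213941, 51391683, 3ec20f2e) follow OpenSSL's CA lookup convention: a trusted certificate is found through a /etc/ssl/certs/<subject-hash>.0 symlink. A sketch of the same dance (hypothetical helper; the log does it with openssl and `ln -fs` over ssh, and writing into /etc/ssl/certs needs root):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    pem := "/usr/share/ca-certificates/minikubeCA.pem"
    // `openssl x509 -hash -noout -in <pem>` prints the subject-name hash,
    // e.g. b5213941 for minikubeCA.pem in the log above.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    if err != nil {
        panic(err)
    }
    link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    if _, err := os.Lstat(link); err != nil {
        // The log recreates this with `sudo ln -fs`.
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }
    fmt.Println("trust link in place:", link)
}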
	I1217 00:46:14.050625 1255403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054447 1255403 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054541 1255403 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:46:14.054555 1255403 command_runner.go:130] > Device: 259,1	Inode: 1315986     Links: 1
	I1217 00:46:14.054575 1255403 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:14.054585 1255403 command_runner.go:130] > Access: 2025-12-17 00:42:05.487679973 +0000
	I1217 00:46:14.054596 1255403 command_runner.go:130] > Modify: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054601 1255403 command_runner.go:130] > Change: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054606 1255403 command_runner.go:130] >  Birth: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054705 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:46:14.095552 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.096144 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:46:14.136799 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.137343 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:46:14.178363 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.178447 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:46:14.219183 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.219732 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:46:14.260450 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.260974 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:46:14.301394 1255403 command_runner.go:130] > Certificate will not expire
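
[Annotation] Each `openssl x509 -checkend 86400` run above succeeds only if the certificate is still valid 24 hours from now. The equivalent check in plain Go (a sketch, not minikube's implementation):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        panic("not a PEM file")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // -checkend 86400: fail if the cert expires within the next 24h.
    if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
        fmt.Println("Certificate will expire")
        os.Exit(1)
    }
    fmt.Println("Certificate will not expire")
}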
	I1217 00:46:14.301907 1255403 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:14.302001 1255403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:46:14.302068 1255403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:46:14.331155 1255403 cri.go:89] found id: ""
	I1217 00:46:14.331262 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:46:14.338208 1255403 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:46:14.338230 1255403 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:46:14.338237 1255403 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:46:14.339135 1255403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:46:14.339150 1255403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:46:14.339201 1255403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:46:14.346631 1255403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:46:14.347092 1255403 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.347204 1255403 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "functional-608344" cluster setting kubeconfig missing "functional-608344" context setting]
	I1217 00:46:14.347476 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.347923 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.348081 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
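
[Annotation] kubeconfig.go decided the file "needs updating (will repair)" because neither a cluster nor a context named functional-608344 was present. A sketch of that check using k8s.io/client-go/tools/clientcmd (the loader behind the loader.go line above):

package main

import (
    "fmt"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22168-1208015/kubeconfig")
    if err != nil {
        panic(err)
    }
    name := "functional-608344"
    _, hasCluster := cfg.Clusters[name]
    _, hasContext := cfg.Contexts[name]
    // Either one missing triggers the repair path, which rewrites the
    // file under the WriteFile lock shown above.
    fmt.Printf("cluster=%v context=%v\n", hasCluster, hasContext)
}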
	I1217 00:46:14.348643 1255403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:46:14.348662 1255403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:46:14.348668 1255403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:46:14.348676 1255403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:46:14.348680 1255403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:46:14.348726 1255403 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:46:14.348987 1255403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:46:14.356813 1255403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:46:14.356847 1255403 kubeadm.go:602] duration metric: took 17.690718ms to restartPrimaryControlPlane
	I1217 00:46:14.356857 1255403 kubeadm.go:403] duration metric: took 54.958395ms to StartCluster
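
[Annotation] restartPrimaryControlPlane took the fast path because `diff -u` found no drift between the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new written earlier. A hypothetical sketch of that decision:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    err := exec.Command("sudo", "diff", "-u",
        "/var/tmp/minikube/kubeadm.yaml",
        "/var/tmp/minikube/kubeadm.yaml.new").Run()
    if err != nil {
        // diff exits non-zero when the files differ: reconfigure kubeadm.
        fmt.Println("kubeadm config drifted:", err)
        return
    }
    fmt.Println("The running cluster does not require reconfiguration")
}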
	I1217 00:46:14.356874 1255403 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.356946 1255403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.357542 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.357832 1255403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:46:14.358027 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:14.358068 1255403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:46:14.358138 1255403 addons.go:70] Setting storage-provisioner=true in profile "functional-608344"
	I1217 00:46:14.358151 1255403 addons.go:239] Setting addon storage-provisioner=true in "functional-608344"
	I1217 00:46:14.358176 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.358595 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.359037 1255403 addons.go:70] Setting default-storageclass=true in profile "functional-608344"
	I1217 00:46:14.359062 1255403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-608344"
	I1217 00:46:14.359347 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.363164 1255403 out.go:179] * Verifying Kubernetes components...
	I1217 00:46:14.370109 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:14.395757 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.395920 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.396204 1255403 addons.go:239] Setting addon default-storageclass=true in "functional-608344"
	I1217 00:46:14.396233 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.396651 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.400122 1255403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:46:14.403014 1255403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.403037 1255403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:46:14.403100 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.432348 1255403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:14.432368 1255403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:46:14.432430 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.436192 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.459745 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.589788 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:14.612125 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.615872 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.372010 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372004 1255403 node_ready.go:35] waiting up to 6m0s for node "functional-608344" to be "Ready" ...
	W1217 00:46:15.372050 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372084 1255403 retry.go:31] will retry after 317.407291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372123 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.372180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.372127 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.372222 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372230 1255403 retry.go:31] will retry after 355.943922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
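
[Annotation] The retry.go lines above implement apply-with-jittered-backoff while the API server is still coming up after the restart. A minimal sketch of the pattern (hypothetical; the delays only loosely match the 317ms/355ms/490ms/... sequence in the log):

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// applyAddon stands in for the `kubectl apply -f ...` that keeps failing
// with "connect: connection refused" in the log above.
func applyAddon() error {
    return errors.New("connect: connection refused")
}

func main() {
    delay := 300 * time.Millisecond
    for attempt := 1; attempt <= 5; attempt++ {
        if err := applyAddon(); err == nil {
            fmt.Println("applied")
            return
        }
        // Jittered, growing waits between attempts.
        wait := delay + time.Duration(rand.Int63n(int64(delay)))
        fmt.Printf("will retry after %s\n", wait)
        time.Sleep(wait)
        delay += delay / 2
    }
    fmt.Println("giving up")
}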
	I1217 00:46:15.690082 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:15.728590 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.752296 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.756079 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.756112 1255403 retry.go:31] will retry after 490.658856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794006 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.794063 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794090 1255403 retry.go:31] will retry after 355.367864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.872347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.150146 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.223269 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.227406 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.227444 1255403 retry.go:31] will retry after 644.228248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.247645 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.305567 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.309114 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.309147 1255403 retry.go:31] will retry after 583.888251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.372333 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.372417 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872396 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.872762 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872991 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.894225 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.973490 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.973584 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.973617 1255403 retry.go:31] will retry after 498.903187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995507 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.995580 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995609 1255403 retry.go:31] will retry after 1.192163017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.373109 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.373180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.373508 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:17.373561 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
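
[Annotation] Interleaved with the addon retries, node_ready.go polls the node object every ~500ms for up to 6m, tolerating connection refused until the apiserver is back. A sketch of the polling shape (hypothetical; the real client authenticates with the profile's client cert rather than skipping TLS verification):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
        Timeout: 2 * time.Second,
    }
    url := "https://192.168.49.2:8441/api/v1/nodes/functional-608344"
    deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s"
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err != nil {
            // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
            time.Sleep(500 * time.Millisecond)
            continue
        }
        resp.Body.Close()
        fmt.Println("API server answered:", resp.Status)
        return
    }
    fmt.Println("gave up waiting for node")
}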
	I1217 00:46:17.473767 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:17.533566 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:17.533674 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.533701 1255403 retry.go:31] will retry after 1.256860103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.873264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.873742 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.188247 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:18.252406 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.256687 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.256719 1255403 retry.go:31] will retry after 1.144811642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.373049 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.373118 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.373371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.790823 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:18.844402 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.847927 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.847962 1255403 retry.go:31] will retry after 2.632795947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.873097 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.873200 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.873479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:19.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.373274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.373606 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:19.373688 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:19.401757 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:19.461824 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:19.461875 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.461894 1255403 retry.go:31] will retry after 1.170153632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.872578 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.872951 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.633061 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:20.706366 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:20.706465 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.706522 1255403 retry.go:31] will retry after 4.067917735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.872741 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.872818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.873104 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.372963 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.373230 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.481608 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:21.538429 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:21.542236 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.542268 1255403 retry.go:31] will retry after 2.033886089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.872800 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.873226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:21.873281 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:22.372860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.372933 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.373246 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:22.872860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.872932 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.873275 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.373010 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.373315 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.576715 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:23.645527 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:23.650062 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.650092 1255403 retry.go:31] will retry after 3.729491652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
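
[Editor's note] The "will retry after ..." lines above come from minikube's retry helper (retry.go:31), which re-runs the failed kubectl apply with a growing, jittered delay while the apiserver is unreachable. The sketch below shows that loop shape only; retryWithBackoff, the base delay, and the jitter factor are illustrative stand-ins, not minikube's actual implementation.

package retryutil

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a growing, jittered delay between tries. The jitter is why
// the logged intervals (2.03s, 3.72s, 5.46s, ...) look irregular.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base << attempt                           // exponential growth
		delay += time.Duration(rand.Int63n(int64(delay)))  // jitter (assumed form)
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

In this log, fn would wrap the "sudo ... kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml" invocation shown above, which keeps failing because validation cannot download the OpenAPI schema from the refused apiserver port.
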
	I1217 00:46:23.872758 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.872840 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.873179 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:24.372935 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.373006 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:24.373329 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:24.774870 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:24.835617 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:24.839228 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.839262 1255403 retry.go:31] will retry after 3.072905013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.872619 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.872702 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.873062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.372995 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.373306 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.873005 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.873083 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.873336 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:26.373211 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.373294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.373696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:26.373764 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
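
[Editor's note] Each node_ready.go:55 warning above marks one iteration of a readiness poll: GET /api/v1/nodes/functional-608344, inspect the Ready condition, sleep, repeat while the connection is refused. Below is a hedged client-go sketch of that loop; waitNodeReady and the 500ms interval are assumptions, and minikube's real poller differs in detail.

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True.
// Transient errors (e.g. "connection refused" while the apiserver is
// down) are logged and retried, matching the warnings in this log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // poll interval (assumed)
		}
	}
}
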
	I1217 00:46:26.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.872371 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.372236 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.372311 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.372626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.380005 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:27.448256 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.448292 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.448311 1255403 retry.go:31] will retry after 5.461633916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.872981 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.873109 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.873476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.912882 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:27.976246 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.976284 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.976302 1255403 retry.go:31] will retry after 5.882789745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:28.373014 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.373087 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.373404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:28.873209 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.873722 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:28.873779 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:29.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.372386 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.372743 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:29.872630 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.872744 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.873074 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.373208 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
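
[Editor's note] The paired round_trippers lines throughout this log (a "Request" entry with verb, URL, and headers, then a "Response" entry with status and latency) are produced by a logging wrapper around the HTTP transport. The sketch below shows the wrapping pattern only; loggingTransport is a stand-in, not client-go's round_trippers implementation. On a refused connection there is no HTTP response at all, which is why status="" and milliseconds=0 appear above.

package logtransport

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTransport wraps another RoundTripper and logs each request's
// verb and URL, then the response status and elapsed milliseconds.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("\"Request\" verb=%q url=%q\n", req.Method, req.URL.String())
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// No response was received, so the status is logged empty.
		fmt.Printf("\"Response\" status=%q milliseconds=%d\n", "", ms)
		return nil, err
	}
	fmt.Printf("\"Response\" status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

Such a transport would be installed with &http.Client{Transport: loggingTransport{next: http.DefaultTransport}} (usage shown for illustration).
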
	I1217 00:46:30.872993 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.873065 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.873363 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:31.373163 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.373238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:31.373629 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:31.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.372266 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.372712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.872416 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.872562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.910180 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:32.967065 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:32.970705 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:32.970737 1255403 retry.go:31] will retry after 5.90385417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.372205 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.372281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.372548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:33.859276 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:33.872587 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.872665 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.872976 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:33.873029 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:33.917348 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:33.917388 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.917407 1255403 retry.go:31] will retry after 6.782848909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:34.373058 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.373482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:34.872326 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.872779 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.372469 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.372549 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.372888 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.872424 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:36.372415 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.372487 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.372800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:36.372853 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:36.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.372265 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.372352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.372682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.872370 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.872441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.372314 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.872216 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.872649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:38.872714 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:38.874746 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:38.934878 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:38.934918 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:38.934938 1255403 retry.go:31] will retry after 11.915569958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:39.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.372630 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:39.872679 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.872752 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.873071 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.372962 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.373071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.700947 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:40.758642 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:40.762387 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.762417 1255403 retry.go:31] will retry after 21.268770127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.872611 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.872685 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.872948 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:40.872988 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:41.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.372862 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:41.872999 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.873072 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.873406 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.373262 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.373529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.872690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:43.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:43.372775 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:43.872456 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.872527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.872851 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.372890 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.372962 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.373276 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.872900 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.872976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.873274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:45.373130 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.373198 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.373481 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:45.373531 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:45.872183 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.872255 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.872577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.372267 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.872217 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.872602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.372685 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.872386 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.872467 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:47.872889 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:48.372210 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.372282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:48.872321 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.872397 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.372328 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.372410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.372788 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.872573 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.872652 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.872990 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:49.873044 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:50.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.372858 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.850773 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:50.873153 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.873230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.873507 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.907175 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:50.910769 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:50.910800 1255403 retry.go:31] will retry after 16.247326027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:51.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.372321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.372590 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:51.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.872692 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:52.372397 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:52.372848 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:52.872212 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.872294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.372690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.872298 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.872374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:54.372770 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.372844 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.373109 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:54.373151 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:54.872853 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.873266 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.372618 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.373044 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.872847 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.872929 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.873202 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:56.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.373168 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:56.373526 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:56.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.872653 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.372334 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.372403 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.872318 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.872429 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.872507 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.872821 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:58.872881 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:59.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:59.872494 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.872570 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.872923 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.372776 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.872467 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.872542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:00.873000 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:01.372521 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.372606 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.372957 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:01.872597 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.872682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:02.032382 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:02.090439 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:02.094499 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:02.094532 1255403 retry.go:31] will retry after 29.296113507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: 10 poll cycles omitted, 00:47:02.372 through 00:47:06.872; every GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 was refused; node_ready.go logged "will retry" warnings at 00:47:02.873 and 00:47:05.373]
	I1217 00:47:07.159163 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:07.225140 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:07.225182 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.225201 1255403 retry.go:31] will retry after 37.614827372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
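
Interleaved with these addon retries, the repeated GET requests are minikube polling the node's Ready condition. The following is a minimal client-go sketch of that kind of readiness poll, assuming the kubeconfig path taken from the log; it is an illustration, not minikube's actual node_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path as seen in the log's KUBECONFIG environment
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-608344", metav1.GetOptions{})
            if err != nil {
                // matches the log's node_ready warnings while the apiserver is down
                fmt.Printf("error getting node (will retry): %v\n", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the timestamps
        }
    }
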
	[log condensed: ~49 poll cycles omitted, 00:47:07.372 through 00:47:31.372; every GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 was refused; node_ready.go repeated its "will retry" warning roughly every 2.5s]
	I1217 00:47:31.391812 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:31.449248 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:31.449293 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.449314 1255403 retry.go:31] will retry after 32.643249775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: ~26 poll cycles omitted, 00:47:31.872 through 00:47:44.373; every GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 was refused; node_ready.go repeated its "will retry" warning roughly every 2.5s]
	I1217 00:47:44.841021 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: 1 poll cycle omitted at 00:47:44.872, refused as before]
	I1217 00:47:44.901181 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901219 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901313 1255403 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
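
Every failure in this window reduces to the same root cause: nothing is listening on port 8441. The kubectl validation error is secondary; client-side validation tries to download the OpenAPI schema from the same unreachable apiserver ("failed to download openapi"), so the suggested --validate=false would only change the error message, and the apply would still likely fail against a dead endpoint. Below is a minimal Go probe of the two endpoints the log shows being refused; this is a diagnostic sketch, not part of the test suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // the two apiserver endpoints the log shows being refused
        for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: connection refused"
                continue
            }
            conn.Close()
            fmt.Printf("%s: open\n", addr)
        }
    }
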
	[log condensed: ~24 poll cycles omitted, 00:47:45.372 through 00:47:56.872, the final entry cut off; every GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 was refused; node_ready.go repeated its "will retry" warning roughly every 2.5s]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.872974 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:57.372728 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.372818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.373081 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:57.373130 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:57.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.872949 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.873273 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.373071 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.373147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.872181 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.872282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.872778 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.372499 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.372573 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.372928 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.872817 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.872915 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.873279 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:59.873340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:00.373167 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.373598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:00.872322 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.872396 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.372325 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.372746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:02.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:02.372982 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:02.872650 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.872731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.873080 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.372870 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.372941 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.373206 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.872565 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.872662 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.872994 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:04.093431 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:48:04.161956 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165693 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165804 1255403 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:48:04.168987 1255403 out.go:179] * Enabled addons: 
	I1217 00:48:04.172517 1255403 addons.go:530] duration metric: took 1m49.814444692s for enable addons: enabled=[]
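The storage-provisioner failure above is a side effect of the same outage: kubectl apply first downloads the OpenAPI schema for client-side validation, and with the apiserver on localhost:8441 refusing connections that download fails, the apply exits non-zero, addons.go retries, and the start ultimately reports an empty addon list. Below is a hedged sketch of that apply-and-retry behavior; applyWithRetry, the attempt count, and the backoff are illustrative assumptions that mirror the command shown in the log, not minikube's real implementation.

```go
// Sketch of an "apply failed, will retry" helper in the spirit of the
// addons.go lines above. The sudo/KUBECONFIG command layout copies the
// log verbatim; the retry policy and names are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl exactly as the log shows minikube
// doing, retrying while the apiserver is still refusing connections.
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d/%d): %v\n%s", i+1, attempts, err, out)
		fmt.Println(lastErr)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	// Paths mirror the ones in the log; adjust for your environment.
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		5, 2*time.Second,
	)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
```

Note that --validate=false, which the kubectl error message suggests, would only skip the schema download; the apply itself would still fail until the apiserver is reachable again.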
	I1217 00:48:04.372853 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.372931 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:04.373316 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:04.872985 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.873066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.873348 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.373121 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.373201 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.373539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.873175 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.873252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.873567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.372269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.372345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.372632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.872369 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.872456 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.872833 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:06.872898 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:07.372604 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.373010 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:07.872787 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.872855 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.873139 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.372993 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.373331 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.873147 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.873586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:08.873687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:09.373212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.373288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.373540 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:09.872555 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.872628 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.872945 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:10.372282 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:48:10.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.872369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.872634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:11.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.372756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:11.372815 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:11.872507 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.873053 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.372797 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.372889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.373152 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.872908 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.872978 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.873325 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:13.373184 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.373269 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.373620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:13.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:13.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.872636 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.873084 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.372598 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.372682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.373038 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.872960 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.873043 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.873401 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.373180 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.872199 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:15.872674 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:16.372365 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.372441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.372748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:16.872398 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.872472 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.372683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.872389 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.872465 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.872803 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:17.872859 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:18.372488 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.372894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:18.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.372327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.872501 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.872865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:19.872907 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:20.872498 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.872906 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.372218 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.872390 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.872727 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:22.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.372529 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.372835 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:22.372884 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:22.872512 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.872593 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.872860 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.372249 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.372651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.872324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.372825 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.872830 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.873278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:24.873332 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:25.373061 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.373140 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.373479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:25.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.872230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.872535 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.872474 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.872823 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:27.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.372554 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:27.372599 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:27.872258 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.372395 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.872546 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:29.372522 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.372981 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:29.373040 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:29.872933 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.873371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.372154 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.372225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.372485 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.872261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.872617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.372737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.872313 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.872382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.872638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:31.872679 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:32.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.372650 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:32.872346 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.872430 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.872800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.372247 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.372612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.872344 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.872424 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.872746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:33.872804 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:34.372760 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.372837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.373165 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:34.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.873107 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.873403 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.372796 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.372872 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.873006 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.873411 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:35.873470 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:36.372142 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.372217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.372567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:36.872286 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.872683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.372367 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.372445 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.372772 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.872328 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.872704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:38.372276 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.372706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:38.372765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:38.872447 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.872533 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.872877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.372388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.872617 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.872700 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.873011 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:40.372798 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.372870 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.373242 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:40.373311 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:40.873046 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.873122 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.873375 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.373263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.372227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.372297 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.372622 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.872266 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.872342 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.872665 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:42.872728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:43.372382 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.372462 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:43.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.872545 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.372527 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.372936 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.872877 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.872970 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.873320 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:44.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	[00:48:45 – 00:49:44: the GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 poll repeats every ~500 ms with identical Accept and User-Agent headers and no response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same warning, error getting node "functional-608344" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused, roughly every 2–2.5 s]
	I1217 00:49:45.372245 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.372641 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.872360 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:45.872431 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.872757 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.372476 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.372555 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.372874 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.872352 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:46.872442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.872756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.372427 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.372843 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:47.872316 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:47.872400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.872738 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.372270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.372525 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.872233 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:48.872305 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.872652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.372364 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.372725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.872595 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:49.872677 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.872968 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.873012 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.372323 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.872283 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:50.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.872695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.372362 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.372439 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.872417 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:51.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.872793 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.372402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.372781 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.372837 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.872499 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:52.872576 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.872861 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.372337 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.872406 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:53.872497 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.872880 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.372942 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.373033 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.373327 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.373380 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.872873 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:54.872946 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.873289 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.373144 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.373221 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.373534 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:55.872319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.872613 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.372250 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:56.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.872664 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.872724 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.372361 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.372707 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:57.872486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.872824 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.372528 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.372963 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.872621 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.873021 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.873080 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.372773 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.372851 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.373182 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.873119 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.873197 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.873526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.872368 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.872754 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:01.372719 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.872316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.872587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.372293 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.372385 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.372341 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.372718 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.372786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.872930 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.373171 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.373565 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.872563 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.872640 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.872400 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.872490 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.872830 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.872896 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:06.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.372620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.872307 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.872724 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.372442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.372532 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.372865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.872303 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.872568 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.372604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:08.372650 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.872368 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.372413 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.372486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.372844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.872786 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.873227 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.372862 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.372935 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.373226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:10.373272 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.872953 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.373164 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.373473 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.873198 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.873284 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.372715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.872568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.872993 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:12.873048 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:13.372927 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.873165 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.873240 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.873498 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.372301 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.372407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.372871 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.872754 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.872837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.873190 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.873248 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:15.372993 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.373063 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.873087 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.373215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.373295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.373634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.872583 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.372302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:17.372792 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:17.872468 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.872545 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.872894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.372588 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.372657 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.372315 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.872564 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.872648 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:19.873002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.872700 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.372611 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.372691 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.372973 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.872655 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.872734 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.873073 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.873119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:22.372896 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.372972 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.373287 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.873079 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.373186 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.373280 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.373600 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.872716 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.372595 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.372669 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.372947 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:24.373002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.872867 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.872947 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.373095 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.373171 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.872191 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.872266 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.872527 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.872403 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.872502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:26.872890 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:27.372542 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.372621 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.372944 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.872693 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.872780 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.873112 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.372992 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.873156 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.873541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.873590 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:29.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.872635 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.872959 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.372576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.872271 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.872677 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.372257 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:31.372730 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.372339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.872735 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.372527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.372826 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:33.372874 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.372580 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.372987 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.872892 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.872961 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.873231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.372626 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.372701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.373063 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:35.373119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 polls repeated every ~500ms from 00:50:35.872 through 00:51:36.372, each returning no response; node_ready.go:55 "will retry" warnings on "dial tcp 192.168.49.2:8441: connect: connection refused" logged roughly every 2s throughout ...]
	I1217 00:51:36.872253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:36.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.872672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.372647 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.872206 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.872274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.872529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:38.372720 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.872325 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.872409 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.872740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.372402 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.372775 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.872763 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.872846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.873157 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.372823 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.372906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.373231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:40.373285 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.873058 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.873128 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.372149 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.372247 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.372579 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.372329 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.372607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.872312 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.872392 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.872710 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:42.872765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:43.372447 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.372542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.372852 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.872323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.872586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.372513 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.372585 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.372919 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.872748 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.872828 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.873215 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:45.372934 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.373011 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.373274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.873496 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.872225 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.872584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.372332 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.372633 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:47.372687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.872341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.372256 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.872433 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.872737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.372366 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.372695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:49.372750 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.872713 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.872797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.873197 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.372974 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.373045 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.373414 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.872184 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.872263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.872626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.372381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.872387 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.872719 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:51.872772 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:52.372290 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.372289 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.372503 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.372578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.372841 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:54.372883 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:54.872831 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.872903 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.873203 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.372953 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.373030 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.373369 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.873134 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.873209 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.873469 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.372169 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.372249 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.372599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.872338 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.872414 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:56.872838 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:57.372465 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.372538 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.372790 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.372770 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.872250 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.872326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.872637 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.372760 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:59.872577 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.873052 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.377171 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.377261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.377582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.872249 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.872322 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.872642 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.872615 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:01.872654 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.372306 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.872696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.372342 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.372415 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.872358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:03.872747 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.872938 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.873008 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.873277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.373195 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.872300 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.872635 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.372224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.372666 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:06.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.872698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.372405 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.372492 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.372840 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.872529 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.872598 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.872872 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.372280 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:08.372751 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:08.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.872352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.372420 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.372508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.372887 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.872807 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.872889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.373055 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:10.373550 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:10.872220 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.872301 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.872593 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.372352 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.372759 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.872616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.372631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.872391 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:12.872789 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:13.372490 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.372574 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.372922 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.872608 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.872675 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.872937 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.372532 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.372618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.373079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.872885 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.872973 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.873356 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:14.873435 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:15.372134 1255403 node_ready.go:38] duration metric: took 6m0.000083316s for node "functional-608344" to be "Ready" ...
	I1217 00:52:15.375301 1255403 out.go:203] 
	W1217 00:52:15.378227 1255403 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:52:15.378247 1255403 out.go:285] * 
	W1217 00:52:15.380407 1255403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:52:15.382698 1255403 out.go:203] 
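
The six minutes of near-identical request/response pairs above are minikube's node-readiness poll: a GET against /api/v1/nodes/functional-608344 roughly every 500ms, each attempt refused because nothing is listening on 192.168.49.2:8441, until the 6m0s wait gives up with "context deadline exceeded". A minimal Go sketch of that retry-until-deadline pattern (illustrative names and a plain TCP dial, not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // waitTCP dials addr once per interval until a connection succeeds
    // or the context deadline expires.
    func waitTCP(ctx context.Context, addr string, interval time.Duration) error {
        tick := time.NewTicker(interval)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return fmt.Errorf("waiting for %s: %w", addr, ctx.Err())
            case <-tick.C:
                conn, err := net.DialTimeout("tcp", addr, interval)
                if err == nil {
                    conn.Close()
                    return nil // the poll in this log never reaches this branch
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitTCP(ctx, "192.168.49.2:8441", 500*time.Millisecond); err != nil {
            fmt.Println(err) // mirrors the "context deadline exceeded" exit above
        }
    }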
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376942816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376957372Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.376998784Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377016729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377035223Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377046678Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377059240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377075864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377091938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377123479Z" level=info msg="Connect containerd service"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.377397697Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.378012285Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397433083Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397499611Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397530536Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.397589827Z" level=info msg="Start recovering state"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437023208Z" level=info msg="Start event monitor"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437213758Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437284429Z" level=info msg="Start streaming server"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437351926Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437408837Z" level=info msg="runtime interface starting up..."
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437465551Z" level=info msg="starting plugins..."
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.437527813Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:46:12 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 00:46:12 functional-608344 containerd[5242]: time="2025-12-17T00:46:12.439319128Z" level=info msg="containerd successfully booted in 0.083914s"
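
containerd itself boots cleanly in about 0.08s here; the only error in this section is the CRI plugin finding no CNI config in /etc/cni/net.d, which is normal on a node where no network plugin has been installed yet (minikube provisions one later in a successful start). For orientation, the CRI plugin is looking for a conflist file of roughly this shape (an illustrative minimal bridge config; the name, device, and subnet are placeholders, not what minikube actually installs):

    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }

Until such a file exists, "cni plugin not initialized" persists and pod sandboxes cannot be networked; in this run it is a side effect of the node never finishing bootstrap, not the failure that aborted the test.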
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:52:19.311144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:19.311781    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:19.313238    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:19.313662    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:19.315339    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
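
Here kubectl runs on the node with the in-VM kubeconfig, which points at localhost:8441; the repeated memcache.go errors are client-go retrying API discovery, and every attempt is refused because kube-apiserver is not running. A one-shot probe of the apiserver readiness endpoint reaches the same diagnosis (hypothetical snippet; certificate verification is skipped only because this is a throwaway local check):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Self-signed cluster cert; acceptable to skip for a local probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/readyz")
        if err != nil {
            // On this node: "dial tcp [::1]:8441: connect: connection refused".
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver:", resp.Status)
    }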
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:52:19 up  6:34,  0 user,  load average: 0.16, 0.25, 0.87
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 17 00:52:16 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:16 functional-608344 kubelet[8465]: E1217 00:52:16.931548    8465 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:16 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:17 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 17 00:52:17 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:17 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:17 functional-608344 kubelet[8517]: E1217 00:52:17.691832    8517 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:17 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:17 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:18 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 17 00:52:18 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:18 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:18 functional-608344 kubelet[8551]: E1217 00:52:18.416476    8551 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:18 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:18 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:19 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 17 00:52:19 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:19 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:19 functional-608344 kubelet[8605]: E1217 00:52:19.177592    8605 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:19 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:19 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
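
This section shows the root cause of everything above: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host and exits, systemd restart-loops it (counters 811 through 814 in this short window alone), so the kube-apiserver static pod is never launched and every connection to port 8441 is refused. The kernel line earlier (5.15.0-1084-aws, a 20.04-era Ubuntu build) is consistent with a legacy cgroup v1 host. A minimal way to check which cgroup hierarchy a host is running, equivalent to "stat -fc %T /sys/fs/cgroup/" (sketch using golang.org/x/sys/unix):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
            panic(err)
        }
        // cgroup2fs (CGROUP2_SUPER_MAGIC) is the unified v2 hierarchy that
        // this kubelet insists on; anything else (typically tmpfs) is v1.
        if st.Type == unix.CGROUP2_SUPER_MAGIC {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 (legacy hierarchy)")
        }
    }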
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (332.562995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.24s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 kubectl -- --context functional-608344 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 kubectl -- --context functional-608344 get pods: exit status 1 (114.300298ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-608344 kubectl -- --context functional-608344 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
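The inspect output shows the contrast behind these failures: Docker reports the node container as Running even though the earlier status call reported the apiserver Stopped. The harness extracts individual fields from this JSON with Go templates; the same technique works interactively, e.g. (a sketch reusing the container name and apiserver port from this run):

    # Container state as Docker sees it ("running" here)
    docker inspect -f '{{.State.Status}}' functional-608344
    # Host port published for the in-container apiserver port 8441
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-608344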
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (315.970967ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:latest                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add minikube-local-cache-test:functional-608344                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache delete minikube-local-cache-test:functional-608344                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl images                                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ cache          │ functional-608344 cache reload                                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ kubectl        │ functional-608344 kubectl -- --context functional-608344 get pods                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
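The final audit row is the command under test: minikube's kubectl pass-through, which forwards everything after the -- separator to a kubectl binary matching the cluster version. The general shape (a sketch, using this run's profile name):

    # Equivalent to running kubectl against the functional-608344 context
    out/minikube-linux-arm64 -p functional-608344 kubectl -- get pods -A

It fails here for the same reason the direct kubectl call did: nothing is serving on 192.168.49.2:8441.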
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:46:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:46:09.841325 1255403 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:46:09.841557 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841588 1255403 out.go:374] Setting ErrFile to fd 2...
	I1217 00:46:09.841608 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841909 1255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:46:09.842319 1255403 out.go:368] Setting JSON to false
	I1217 00:46:09.843208 1255403 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23320,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:46:09.843304 1255403 start.go:143] virtualization:  
	I1217 00:46:09.846714 1255403 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:46:09.849718 1255403 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:46:09.849800 1255403 notify.go:221] Checking for updates...
	I1217 00:46:09.855303 1255403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:46:09.858207 1255403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:09.860971 1255403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:46:09.863762 1255403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:46:09.866648 1255403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:46:09.869965 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:09.870075 1255403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:46:09.899794 1255403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:46:09.899910 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:09.954202 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:09.945326941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:09.954303 1255403 docker.go:319] overlay module found
	I1217 00:46:09.957332 1255403 out.go:179] * Using the docker driver based on existing profile
	I1217 00:46:09.960126 1255403 start.go:309] selected driver: docker
	I1217 00:46:09.960147 1255403 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:09.960238 1255403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:46:09.960367 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:10.027336 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:10.013273525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:10.027811 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:10.027879 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:10.027939 1255403 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:10.033595 1255403 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:46:10.036654 1255403 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:46:10.039839 1255403 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:46:10.042883 1255403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:46:10.042915 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:10.042969 1255403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:46:10.042980 1255403 cache.go:65] Caching tarball of preloaded images
	I1217 00:46:10.043067 1255403 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:46:10.043077 1255403 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:46:10.043192 1255403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:46:10.064109 1255403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:46:10.064135 1255403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:46:10.064157 1255403 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:46:10.064192 1255403 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:46:10.064257 1255403 start.go:364] duration metric: took 41.379µs to acquireMachinesLock for "functional-608344"
	I1217 00:46:10.064326 1255403 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:46:10.064336 1255403 fix.go:54] fixHost starting: 
	I1217 00:46:10.064635 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:10.082218 1255403 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:46:10.082251 1255403 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:46:10.085538 1255403 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:46:10.085593 1255403 machine.go:94] provisionDockerMachine start ...
	I1217 00:46:10.085773 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.104030 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.104380 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.104395 1255403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:46:10.233303 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.233328 1255403 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:46:10.233404 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.250839 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.251149 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.251164 1255403 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:46:10.396645 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.396749 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.422445 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.422746 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.422762 1255403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:46:10.553926 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:46:10.553954 1255403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:46:10.554002 1255403 ubuntu.go:190] setting up certificates
	I1217 00:46:10.554025 1255403 provision.go:84] configureAuth start
	I1217 00:46:10.554113 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:10.571790 1255403 provision.go:143] copyHostCerts
	I1217 00:46:10.571842 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571886 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:46:10.571897 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571976 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:46:10.572067 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572088 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:46:10.572098 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572127 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:46:10.572172 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572192 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:46:10.572198 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572222 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:46:10.572274 1255403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:46:10.693030 1255403 provision.go:177] copyRemoteCerts
	I1217 00:46:10.693099 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:46:10.693140 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.710526 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.805595 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:46:10.805709 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:46:10.823672 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:46:10.823734 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:46:10.841740 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:46:10.841805 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:46:10.859736 1255403 provision.go:87] duration metric: took 305.682111ms to configureAuth
	I1217 00:46:10.859764 1255403 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:46:10.859948 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:10.859960 1255403 machine.go:97] duration metric: took 774.357768ms to provisionDockerMachine
	I1217 00:46:10.859968 1255403 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:46:10.859979 1255403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:46:10.860038 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:46:10.860081 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.876877 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.973995 1255403 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:46:10.977418 1255403 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:46:10.977440 1255403 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:46:10.977445 1255403 command_runner.go:130] > VERSION_ID="12"
	I1217 00:46:10.977450 1255403 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:46:10.977468 1255403 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:46:10.977472 1255403 command_runner.go:130] > ID=debian
	I1217 00:46:10.977477 1255403 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:46:10.977482 1255403 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:46:10.977488 1255403 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:46:10.977542 1255403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:46:10.977565 1255403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:46:10.977576 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:46:10.977631 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:46:10.977740 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:46:10.977753 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /etc/ssl/certs/12112432.pem
	I1217 00:46:10.977836 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:46:10.977845 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> /etc/test/nested/copy/1211243/hosts
	I1217 00:46:10.977888 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:46:10.985858 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:11.003616 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:46:11.025062 1255403 start.go:296] duration metric: took 165.078815ms for postStartSetup
	I1217 00:46:11.025171 1255403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:46:11.025235 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.042501 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.135058 1255403 command_runner.go:130] > 18%
	I1217 00:46:11.135791 1255403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:46:11.141537 1255403 command_runner.go:130] > 159G
	I1217 00:46:11.142252 1255403 fix.go:56] duration metric: took 1.077909712s for fixHost
	I1217 00:46:11.142316 1255403 start.go:83] releasing machines lock for "functional-608344", held for 1.07800111s
	I1217 00:46:11.142412 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:11.164178 1255403 ssh_runner.go:195] Run: cat /version.json
	I1217 00:46:11.164239 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.164497 1255403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:46:11.164553 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.196976 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.203865 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.389604 1255403 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:46:11.389719 1255403 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:46:11.389906 1255403 ssh_runner.go:195] Run: systemctl --version
	I1217 00:46:11.396314 1255403 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:46:11.396351 1255403 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:46:11.396781 1255403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:46:11.401747 1255403 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:46:11.401791 1255403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:46:11.401850 1255403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:46:11.410012 1255403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:46:11.410035 1255403 start.go:496] detecting cgroup driver to use...
	I1217 00:46:11.410068 1255403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:46:11.410119 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:46:11.427912 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:46:11.441702 1255403 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:46:11.441797 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:46:11.458922 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:46:11.473296 1255403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:46:11.602661 1255403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:46:11.727834 1255403 docker.go:234] disabling docker service ...
	I1217 00:46:11.727932 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:46:11.743775 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:46:11.756449 1255403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:46:11.884208 1255403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:46:12.041744 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:46:12.055323 1255403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:46:12.069025 1255403 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:46:12.070254 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:46:12.080613 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:46:12.090397 1255403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:46:12.090539 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:46:12.100248 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.110370 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:46:12.120135 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.130289 1255403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:46:12.139404 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:46:12.148731 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:46:12.158190 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:46:12.167677 1255403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:46:12.175393 1255403 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:46:12.175487 1255403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:46:12.183394 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.301782 1255403 ssh_runner.go:195] Run: sudo systemctl restart containerd
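The sed edits above rewrite /etc/containerd/config.toml to match the cgroupfs driver detected on the host (SystemdCgroup = false), pin the sandbox image to registry.k8s.io/pause:3.10.1, and re-enable unprivileged ports before containerd is restarted. A quick way to verify the result on the node (a hypothetical check, not part of this run):

    # Expected after the rewrite: SystemdCgroup = false
    grep -n 'SystemdCgroup' /etc/containerd/config.toml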
	I1217 00:46:12.439684 1255403 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:46:12.439765 1255403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:46:12.443346 1255403 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 00:46:12.443371 1255403 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:46:12.443378 1255403 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1217 00:46:12.443385 1255403 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:12.443391 1255403 command_runner.go:130] > Access: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443396 1255403 command_runner.go:130] > Modify: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443401 1255403 command_runner.go:130] > Change: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443405 1255403 command_runner.go:130] >  Birth: -
	I1217 00:46:12.443632 1255403 start.go:564] Will wait 60s for crictl version
	I1217 00:46:12.443703 1255403 ssh_runner.go:195] Run: which crictl
	I1217 00:46:12.446726 1255403 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:46:12.447174 1255403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:46:12.472886 1255403 command_runner.go:130] > Version:  0.1.0
	I1217 00:46:12.473228 1255403 command_runner.go:130] > RuntimeName:  containerd
	I1217 00:46:12.473244 1255403 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 00:46:12.473249 1255403 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:46:12.475292 1255403 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:46:12.475358 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.494552 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.496407 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.517873 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.525827 1255403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:46:12.528776 1255403 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:46:12.544531 1255403 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:46:12.548354 1255403 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:46:12.548680 1255403 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:46:12.548798 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:12.548865 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.573132 1255403 command_runner.go:130] > {
	I1217 00:46:12.573158 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.573163 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573172 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.573185 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573191 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.573195 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573199 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573208 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.573215 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573220 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.573226 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573230 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573234 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573237 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573252 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.573259 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573265 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.573268 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573273 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573284 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.573288 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573292 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.573296 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573300 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573306 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573310 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573323 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.573327 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573333 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.573339 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573350 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573361 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.573365 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573371 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.573376 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.573379 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573385 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573389 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573398 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.573404 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573409 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.573412 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573418 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573426 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.573432 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573437 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.573440 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573446 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573449 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573455 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573459 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573465 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573468 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573475 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.573478 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573484 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.573490 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573494 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573504 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.573508 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573512 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.573521 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573529 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573541 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573546 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573551 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573555 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573560 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573567 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.573574 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573580 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.573583 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573590 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573598 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.573605 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573609 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.573613 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573622 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573625 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573629 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573634 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573660 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573664 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573671 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.573681 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573690 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.573694 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573698 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573710 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.573714 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573719 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.573725 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573729 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573733 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573736 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573743 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.573753 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573759 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.573762 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573765 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573773 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.573776 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573784 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.573790 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573794 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573800 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573804 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573816 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573819 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573822 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573830 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.573836 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573842 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.573845 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573851 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573859 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.573864 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573868 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.573875 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573879 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.573884 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573888 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573894 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.573897 1255403 command_runner.go:130] >     }
	I1217 00:46:12.573900 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.573903 1255403 command_runner.go:130] > }
	I1217 00:46:12.574073 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.574086 1255403 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:46:12.574147 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.596238 1255403 command_runner.go:130] > {
	I1217 00:46:12.596261 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.596266 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596284 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.596300 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596310 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.596314 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596318 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596329 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.596337 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596342 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.596346 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596353 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596356 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596362 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596372 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.596380 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596386 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.596389 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596393 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596402 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.596408 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596413 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.596417 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596422 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596427 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596432 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596442 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.596446 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596451 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.596457 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596464 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596472 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.596477 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596482 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.596486 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.596492 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596500 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596506 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596513 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.596518 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596523 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.596529 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596533 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596540 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.596547 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596551 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.596554 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596569 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596572 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596577 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596585 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596591 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596594 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596622 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.596626 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596638 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.596641 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596645 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596659 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.596662 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596667 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.596673 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596683 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596690 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596694 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596697 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596707 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596710 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596717 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.596726 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596733 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.596739 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596743 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596751 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.596755 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596761 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.596765 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596771 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596775 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596784 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596788 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596791 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596795 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596808 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.596813 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596818 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.596824 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596828 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596836 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.596839 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596847 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.596853 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596857 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596863 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596866 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596873 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.596879 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596885 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.596889 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596900 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596908 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.596914 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596923 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.596927 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596931 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596936 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596940 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596947 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596950 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596953 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596960 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.596967 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596971 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.596975 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596981 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596989 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.596996 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.597000 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.597004 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.597008 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.597013 1255403 command_runner.go:130] >       },
	I1217 00:46:12.597023 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.597027 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.597030 1255403 command_runner.go:130] >     }
	I1217 00:46:12.597033 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.597039 1255403 command_runner.go:130] > }
	I1217 00:46:12.599655 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.599676 1255403 cache_images.go:86] Images are preloaded, skipping loading
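	Note: both crictl listings above returned the full expected image set for v1.35.0-beta.0, so minikube skips tarball extraction and image loading. The same condition it just evaluated can be checked by hand (a sketch; jq on the host is an assumption):

	    # list the repo tags the runtime already has, as minikube just did
	    minikube ssh -p functional-608344 -- sudo crictl images --output json \
	      | jq -r '.images[].repoTags[]'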
	I1217 00:46:12.599685 1255403 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:46:12.599841 1255403 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
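	Note: in the kubelet unit above, the empty ExecStart= line is the standard systemd idiom for a drop-in override — it clears the ExecStart inherited from the base unit before the next line redefines it. The rendered drop-in is written a few steps below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp) and can be read back with, for example:

	    # inspect the drop-in that carries the ExecStart override (sketch)
	    minikube ssh -p functional-608344 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf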
	I1217 00:46:12.599942 1255403 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:46:12.623140 1255403 command_runner.go:130] > {
	I1217 00:46:12.623159 1255403 command_runner.go:130] >   "cniconfig": {
	I1217 00:46:12.623164 1255403 command_runner.go:130] >     "Networks": [
	I1217 00:46:12.623168 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623173 1255403 command_runner.go:130] >         "Config": {
	I1217 00:46:12.623178 1255403 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 00:46:12.623184 1255403 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 00:46:12.623188 1255403 command_runner.go:130] >           "Plugins": [
	I1217 00:46:12.623192 1255403 command_runner.go:130] >             {
	I1217 00:46:12.623196 1255403 command_runner.go:130] >               "Network": {
	I1217 00:46:12.623200 1255403 command_runner.go:130] >                 "ipam": {},
	I1217 00:46:12.623205 1255403 command_runner.go:130] >                 "type": "loopback"
	I1217 00:46:12.623209 1255403 command_runner.go:130] >               },
	I1217 00:46:12.623214 1255403 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 00:46:12.623218 1255403 command_runner.go:130] >             }
	I1217 00:46:12.623221 1255403 command_runner.go:130] >           ],
	I1217 00:46:12.623230 1255403 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 00:46:12.623234 1255403 command_runner.go:130] >         },
	I1217 00:46:12.623239 1255403 command_runner.go:130] >         "IFName": "lo"
	I1217 00:46:12.623243 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623246 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623250 1255403 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 00:46:12.623253 1255403 command_runner.go:130] >     "PluginDirs": [
	I1217 00:46:12.623257 1255403 command_runner.go:130] >       "/opt/cni/bin"
	I1217 00:46:12.623260 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623265 1255403 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 00:46:12.623269 1255403 command_runner.go:130] >     "Prefix": "eth"
	I1217 00:46:12.623272 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623284 1255403 command_runner.go:130] >   "config": {
	I1217 00:46:12.623288 1255403 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 00:46:12.623292 1255403 command_runner.go:130] >       "/etc/cdi",
	I1217 00:46:12.623297 1255403 command_runner.go:130] >       "/var/run/cdi"
	I1217 00:46:12.623300 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623303 1255403 command_runner.go:130] >     "cni": {
	I1217 00:46:12.623306 1255403 command_runner.go:130] >       "binDir": "",
	I1217 00:46:12.623310 1255403 command_runner.go:130] >       "binDirs": [
	I1217 00:46:12.623314 1255403 command_runner.go:130] >         "/opt/cni/bin"
	I1217 00:46:12.623317 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.623322 1255403 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 00:46:12.623325 1255403 command_runner.go:130] >       "confTemplate": "",
	I1217 00:46:12.623329 1255403 command_runner.go:130] >       "ipPref": "",
	I1217 00:46:12.623333 1255403 command_runner.go:130] >       "maxConfNum": 1,
	I1217 00:46:12.623337 1255403 command_runner.go:130] >       "setupSerially": false,
	I1217 00:46:12.623341 1255403 command_runner.go:130] >       "useInternalLoopback": false
	I1217 00:46:12.623344 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623352 1255403 command_runner.go:130] >     "containerd": {
	I1217 00:46:12.623356 1255403 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 00:46:12.623361 1255403 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 00:46:12.623366 1255403 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 00:46:12.623369 1255403 command_runner.go:130] >       "runtimes": {
	I1217 00:46:12.623372 1255403 command_runner.go:130] >         "runc": {
	I1217 00:46:12.623377 1255403 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 00:46:12.623381 1255403 command_runner.go:130] >           "PodAnnotations": null,
	I1217 00:46:12.623386 1255403 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 00:46:12.623391 1255403 command_runner.go:130] >           "cgroupWritable": false,
	I1217 00:46:12.623395 1255403 command_runner.go:130] >           "cniConfDir": "",
	I1217 00:46:12.623399 1255403 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 00:46:12.623403 1255403 command_runner.go:130] >           "io_type": "",
	I1217 00:46:12.623406 1255403 command_runner.go:130] >           "options": {
	I1217 00:46:12.623410 1255403 command_runner.go:130] >             "BinaryName": "",
	I1217 00:46:12.623414 1255403 command_runner.go:130] >             "CriuImagePath": "",
	I1217 00:46:12.623421 1255403 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 00:46:12.623426 1255403 command_runner.go:130] >             "IoGid": 0,
	I1217 00:46:12.623429 1255403 command_runner.go:130] >             "IoUid": 0,
	I1217 00:46:12.623434 1255403 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 00:46:12.623437 1255403 command_runner.go:130] >             "Root": "",
	I1217 00:46:12.623441 1255403 command_runner.go:130] >             "ShimCgroup": "",
	I1217 00:46:12.623445 1255403 command_runner.go:130] >             "SystemdCgroup": false
	I1217 00:46:12.623448 1255403 command_runner.go:130] >           },
	I1217 00:46:12.623453 1255403 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 00:46:12.623459 1255403 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 00:46:12.623463 1255403 command_runner.go:130] >           "runtimePath": "",
	I1217 00:46:12.623468 1255403 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 00:46:12.623473 1255403 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 00:46:12.623476 1255403 command_runner.go:130] >           "snapshotter": ""
	I1217 00:46:12.623479 1255403 command_runner.go:130] >         }
	I1217 00:46:12.623483 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623486 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623495 1255403 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 00:46:12.623500 1255403 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 00:46:12.623507 1255403 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 00:46:12.623511 1255403 command_runner.go:130] >     "disableApparmor": false,
	I1217 00:46:12.623517 1255403 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 00:46:12.623522 1255403 command_runner.go:130] >     "disableProcMount": false,
	I1217 00:46:12.623526 1255403 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 00:46:12.623530 1255403 command_runner.go:130] >     "enableCDI": true,
	I1217 00:46:12.623534 1255403 command_runner.go:130] >     "enableSelinux": false,
	I1217 00:46:12.623538 1255403 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 00:46:12.623542 1255403 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 00:46:12.623547 1255403 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 00:46:12.623551 1255403 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 00:46:12.623555 1255403 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 00:46:12.623559 1255403 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 00:46:12.623563 1255403 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 00:46:12.623571 1255403 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623576 1255403 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 00:46:12.623581 1255403 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623585 1255403 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 00:46:12.623590 1255403 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 00:46:12.623593 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623596 1255403 command_runner.go:130] >   "features": {
	I1217 00:46:12.623601 1255403 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 00:46:12.623603 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623607 1255403 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 00:46:12.623617 1255403 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623626 1255403 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623630 1255403 command_runner.go:130] >   "runtimeHandlers": [
	I1217 00:46:12.623632 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623636 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623640 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623645 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623648 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623651 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623654 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623657 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623662 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623666 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623670 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623673 1255403 command_runner.go:130] >       "name": "runc"
	I1217 00:46:12.623676 1255403 command_runner.go:130] >     }
	I1217 00:46:12.623678 1255403 command_runner.go:130] >   ],
	I1217 00:46:12.623682 1255403 command_runner.go:130] >   "status": {
	I1217 00:46:12.623685 1255403 command_runner.go:130] >     "conditions": [
	I1217 00:46:12.623688 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623692 1255403 command_runner.go:130] >         "message": "",
	I1217 00:46:12.623695 1255403 command_runner.go:130] >         "reason": "",
	I1217 00:46:12.623699 1255403 command_runner.go:130] >         "status": true,
	I1217 00:46:12.623708 1255403 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 00:46:12.623711 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623714 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623721 1255403 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 00:46:12.623726 1255403 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 00:46:12.623729 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623733 1255403 command_runner.go:130] >         "type": "NetworkReady"
	I1217 00:46:12.623737 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623739 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623760 1255403 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 00:46:12.623766 1255403 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 00:46:12.623771 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623776 1255403 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 00:46:12.623779 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623782 1255403 command_runner.go:130] >     ]
	I1217 00:46:12.623784 1255403 command_runner.go:130] >   }
	I1217 00:46:12.623787 1255403 command_runner.go:130] > }
	I1217 00:46:12.625494 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:12.625564 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
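	Note: the crictl info dump above reports NetworkReady=false ("cni plugin not initialized") because /etc/cni/net.d is still empty at this point; that is expected before a CNI is applied, and it is why the docker driver + containerd combination selects kindnet here. One way to watch the condition flip once a CNI config lands (a sketch; jq assumed):

	    # extract just the NetworkReady condition from crictl info
	    minikube ssh -p functional-608344 -- sudo crictl info \
	      | jq '.status.conditions[] | select(.type == "NetworkReady")'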
	I1217 00:46:12.625600 1255403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:46:12.625679 1255403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:46:12.625821 1255403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:46:12.625903 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:46:12.632727 1255403 command_runner.go:130] > kubeadm
	I1217 00:46:12.632744 1255403 command_runner.go:130] > kubectl
	I1217 00:46:12.632749 1255403 command_runner.go:130] > kubelet
	I1217 00:46:12.633544 1255403 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:46:12.633634 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:46:12.641025 1255403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:46:12.653291 1255403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:46:12.665363 1255403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
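	Note: the 2237-byte file written above is the rendered kubeadm config printed earlier (kubeadm.go:196), staged as /var/tmp/minikube/kubeadm.yaml.new. A sketch for reading it back from the node and letting kubeadm vet it — the validate subcommand exists in recent kubeadm releases, but treat its availability in this beta build as an assumption:

	    # read the staged config back, then let kubeadm vet it (sketch)
	    minikube ssh -p functional-608344 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    minikube ssh -p functional-608344 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new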
	I1217 00:46:12.678080 1255403 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:46:12.681502 1255403 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:46:12.681599 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.825775 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
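	Note: daemon-reload picks up the rewritten kubelet.service and its drop-in, and the kubelet start accounts for the ~0.8 s gap before the next log line. A quick health check after this step (sketch):

	    # confirm the kubelet unit came up after the restart
	    minikube ssh -p functional-608344 -- sudo systemctl is-active kubelet
	    minikube ssh -p functional-608344 -- sudo journalctl -u kubelet -n 20 --no-pager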
	I1217 00:46:13.622571 1255403 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:46:13.622593 1255403 certs.go:195] generating shared ca certs ...
	I1217 00:46:13.622609 1255403 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:13.622746 1255403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:46:13.622792 1255403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:46:13.622803 1255403 certs.go:257] generating profile certs ...
	I1217 00:46:13.622905 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:46:13.622962 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:46:13.623005 1255403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:46:13.623018 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:46:13.623032 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:46:13.623044 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:46:13.623063 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:46:13.623080 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:46:13.623092 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:46:13.623103 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:46:13.623112 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:46:13.623163 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:46:13.623197 1255403 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:46:13.623208 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:46:13.623239 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:46:13.623268 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:46:13.623296 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:46:13.623339 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:13.623376 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem -> /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.623391 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.623403 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.630954 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:46:13.648792 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:46:13.668204 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:46:13.687794 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:46:13.706777 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:46:13.724521 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:46:13.741552 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:46:13.758610 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:46:13.775595 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:46:13.791737 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:46:13.808409 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:46:13.825079 1255403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:46:13.838395 1255403 ssh_runner.go:195] Run: openssl version
	I1217 00:46:13.844664 1255403 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:46:13.845138 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.852395 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:46:13.860295 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864169 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864290 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864356 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.907286 1255403 command_runner.go:130] > b5213941
	I1217 00:46:13.907795 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:46:13.915373 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.922487 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:46:13.929849 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933445 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933486 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933532 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.974007 1255403 command_runner.go:130] > 51391683
	I1217 00:46:13.974086 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:46:13.981522 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.988760 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:46:13.996178 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.999808 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000049 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000110 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.042220 1255403 command_runner.go:130] > 3ec20f2e
	I1217 00:46:14.042784 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
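	Note: the hash-and-symlink sequence above is OpenSSL's CA lookup convention: openssl x509 -hash prints the certificate's subject-name hash, and OpenSSL trusts a CA when /etc/ssl/certs/<hash>.0 resolves to it. The three hashes printed above (b5213941, 51391683, 3ec20f2e) become exactly those symlink names. Condensed, inside the node:

	    # link a CA under its subject hash so OpenSSL can find it (sketch)
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 above
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"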
	I1217 00:46:14.050625 1255403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054447 1255403 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054541 1255403 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:46:14.054555 1255403 command_runner.go:130] > Device: 259,1	Inode: 1315986     Links: 1
	I1217 00:46:14.054575 1255403 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:14.054585 1255403 command_runner.go:130] > Access: 2025-12-17 00:42:05.487679973 +0000
	I1217 00:46:14.054596 1255403 command_runner.go:130] > Modify: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054601 1255403 command_runner.go:130] > Change: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054606 1255403 command_runner.go:130] >  Birth: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054705 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:46:14.095552 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.096144 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:46:14.136799 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.137343 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:46:14.178363 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.178447 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:46:14.219183 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.219732 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:46:14.260450 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.260974 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:46:14.301394 1255403 command_runner.go:130] > Certificate will not expire
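	Note: each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 and prints "Certificate will not expire" when it will, and exits 1 otherwise, which is how minikube decides the existing certs can be reused. Standalone, inside the node:

	    # exit status 0 means the cert is good for at least another 24h (sketch)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"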
	I1217 00:46:14.301907 1255403 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:14.302001 1255403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:46:14.302068 1255403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:46:14.331155 1255403 cri.go:89] found id: ""
	I1217 00:46:14.331262 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:46:14.338208 1255403 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:46:14.338230 1255403 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:46:14.338237 1255403 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:46:14.339135 1255403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:46:14.339150 1255403 kubeadm.go:598] restartPrimaryControlPlane start ...
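The `sudo ls` probe just above drives the restart-vs-init decision: all three kubeadm artifacts were found, so minikube attempts a cluster restart rather than a fresh `kubeadm init`. A sketch of that decision under the assumption that it reduces to "all artifacts present" (the probe runs remotely in the log; here it is a local stat for illustration):

package main

import (
	"fmt"
	"os"
)

// pathExists reports whether p exists; in the log the equivalent probe
// runs on the node via `sudo ls`.
func pathExists(p string) bool {
	_, err := os.Stat(p)
	return err == nil
}

// shouldRestart mirrors the decision logged above: if every kubeadm
// artifact is present, attempt a restart of the existing control plane.
func shouldRestart() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	} {
		if !pathExists(p) {
			return false // missing artifact: fall back to a fresh init
		}
	}
	return true
}

func main() { fmt.Println(shouldRestart()) }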
	I1217 00:46:14.339201 1255403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:46:14.346631 1255403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:46:14.347092 1255403 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.347204 1255403 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "functional-608344" cluster setting kubeconfig missing "functional-608344" context setting]
	I1217 00:46:14.347476 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
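The kubeconfig repair step above notices that the "functional-608344" cluster and context entries are missing and rewrites the file under a write lock. A minimal sketch of such a repair using client-go's clientcmd package (the CA path and the cluster-name-equals-user-name convention are assumptions for illustration):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds the missing cluster and context entries for a
// profile, analogous to the "needs updating (will repair)" step above.
func repairKubeconfig(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: caPath,
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(
		"/home/jenkins/minikube-integration/22168-1208015/kubeconfig",
		"functional-608344",
		"https://192.168.49.2:8441",
		"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt",
	)
}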
	I1217 00:46:14.347923 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.348081 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.348643 1255403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:46:14.348662 1255403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:46:14.348668 1255403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:46:14.348676 1255403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:46:14.348680 1255403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
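The "Feature gate default state" lines above come from client-go's environment-variable feature gates; each gate's default can be flipped per process via an env var. A sketch of that lookup, assuming client-go's KUBE_FEATURE_<Name> naming convention (treat the prefix as an assumption, not a verified constant):

package main

import (
	"fmt"
	"os"
	"strconv"
)

// featureEnabled returns the gate's effective state: the default unless a
// KUBE_FEATURE_<name> environment variable overrides it with a parseable bool.
func featureEnabled(name string, def bool) bool {
	if v, ok := os.LookupEnv("KUBE_FEATURE_" + name); ok {
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return def
}

func main() {
	fmt.Println(featureEnabled("WatchListClient", false)) // false unless overridden
}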
	I1217 00:46:14.348726 1255403 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:46:14.348987 1255403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:46:14.356813 1255403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:46:14.356847 1255403 kubeadm.go:602] duration metric: took 17.690718ms to restartPrimaryControlPlane
	I1217 00:46:14.356857 1255403 kubeadm.go:403] duration metric: took 54.958395ms to StartCluster
	I1217 00:46:14.356874 1255403 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.356946 1255403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.357542 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.357832 1255403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:46:14.358027 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:14.358068 1255403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
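Of the long toEnable map above, only default-storageclass and storage-provisioner are true, which is why only those two addons are set up in the lines that follow. A trivial sketch of filtering that map down to the enabled set (function name is illustrative):

package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the sorted names of addons whose value is true.
func enabledAddons(toEnable map[string]bool) []string {
	var on []string
	for name, v := range toEnable {
		if v {
			on = append(on, name)
		}
	}
	sort.Strings(on)
	return on
}

func main() {
	fmt.Println(enabledAddons(map[string]bool{
		"default-storageclass": true,
		"storage-provisioner":  true,
		"dashboard":            false,
	})) // [default-storageclass storage-provisioner]
}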
	I1217 00:46:14.358138 1255403 addons.go:70] Setting storage-provisioner=true in profile "functional-608344"
	I1217 00:46:14.358151 1255403 addons.go:239] Setting addon storage-provisioner=true in "functional-608344"
	I1217 00:46:14.358176 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.358595 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.359037 1255403 addons.go:70] Setting default-storageclass=true in profile "functional-608344"
	I1217 00:46:14.359062 1255403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-608344"
	I1217 00:46:14.359347 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.363164 1255403 out.go:179] * Verifying Kubernetes components...
	I1217 00:46:14.370109 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:14.395757 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.395920 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.396204 1255403 addons.go:239] Setting addon default-storageclass=true in "functional-608344"
	I1217 00:46:14.396233 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.396651 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.400122 1255403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:46:14.403014 1255403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.403037 1255403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:46:14.403100 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.432348 1255403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:14.432368 1255403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:46:14.432430 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.436192 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.459745 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.589788 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:14.612125 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.615872 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.372010 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372004 1255403 node_ready.go:35] waiting up to 6m0s for node "functional-608344" to be "Ready" ...
	W1217 00:46:15.372050 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372084 1255403 retry.go:31] will retry after 317.407291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372123 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.372180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.372127 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.372222 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372230 1255403 retry.go:31] will retry after 355.943922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
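From here to the end of the section, both addon applies fail the same way (the apiserver on localhost:8441 is refusing connections during the restart) and retry.go re-runs each one after a growing, jittered delay: 317ms, 355ms, 490ms, and so on. A sketch of that retry shape, with delays and attempt count as illustrative tuning rather than minikube's actual values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries -- the same
// shape as the "will retry after ..." lines in this log.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 300*time.Millisecond, func() error {
		return errors.New("connect: connection refused")
	})
}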
	I1217 00:46:15.372458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:15.690082 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:15.728590 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.752296 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.756079 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.756112 1255403 retry.go:31] will retry after 490.658856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794006 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.794063 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794090 1255403 retry.go:31] will retry after 355.367864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.872347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.150146 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.223269 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.227406 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.227444 1255403 retry.go:31] will retry after 644.228248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.247645 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.305567 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.309114 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.309147 1255403 retry.go:31] will retry after 583.888251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.372333 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.372417 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872396 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.872762 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872991 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.894225 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.973490 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.973584 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.973617 1255403 retry.go:31] will retry after 498.903187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995507 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.995580 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995609 1255403 retry.go:31] will retry after 1.192163017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.373109 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.373180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.373508 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:17.373561 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
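In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-608344 roughly every 500ms, tolerating the connection-refused errors above until the apiserver comes back. A sketch of that wait loop with client-go (the function and package names are illustrative):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True,
// retrying on any error (e.g. connection refused while the apiserver
// restarts) and probing about twice per second, as the log shows.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // overall 6m0s budget exhausted
		case <-tick.C:
		}
	}
}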
	I1217 00:46:17.473767 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:17.533566 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:17.533674 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.533701 1255403 retry.go:31] will retry after 1.256860103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.873264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.873742 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.188247 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:18.252406 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.256687 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.256719 1255403 retry.go:31] will retry after 1.144811642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.373049 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.373118 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.373371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.790823 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:18.844402 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.847927 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.847962 1255403 retry.go:31] will retry after 2.632795947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.873097 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.873200 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.873479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:19.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.373274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.373606 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:19.373688 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:19.401757 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:19.461824 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:19.461875 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.461894 1255403 retry.go:31] will retry after 1.170153632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.872578 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.872951 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.633061 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:20.706366 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:20.706465 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.706522 1255403 retry.go:31] will retry after 4.067917735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.872741 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.872818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.873104 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.372963 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.373230 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.481608 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:21.538429 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:21.542236 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.542268 1255403 retry.go:31] will retry after 2.033886089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.872800 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.873226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:21.873281 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:22.372860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.372933 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.373246 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:22.872860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.872932 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.873275 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.373010 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.373315 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.576715 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:23.645527 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:23.650062 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.650092 1255403 retry.go:31] will retry after 3.729491652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.872758 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.872840 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.873179 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:24.372935 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.373006 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:24.373329 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:24.774870 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:24.835617 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:24.839228 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.839262 1255403 retry.go:31] will retry after 3.072905013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.872619 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.872702 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.873062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.372995 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.373306 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.873005 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.873083 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.873336 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:26.373211 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.373294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.373696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:26.373764 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:26.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.872371 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.372236 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.372311 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.372626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.380005 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:27.448256 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.448292 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.448311 1255403 retry.go:31] will retry after 5.461633916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.872981 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.873109 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.873476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.912882 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:27.976246 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.976284 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.976302 1255403 retry.go:31] will retry after 5.882789745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:28.373014 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.373087 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.373404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:28.873209 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.873722 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:28.873779 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:29.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.372386 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.372743 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:29.872630 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.872744 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.873074 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.373208 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.872993 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.873065 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.873363 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:31.373163 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.373238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:31.373629 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:31.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.372266 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.372712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.872416 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.872562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.910180 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:32.967065 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:32.970705 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:32.970737 1255403 retry.go:31] will retry after 5.90385417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.372205 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.372281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.372548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:33.859276 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:33.872587 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.872665 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.872976 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:33.873029 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:33.917348 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:33.917388 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.917407 1255403 retry.go:31] will retry after 6.782848909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:34.373058 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.373482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:34.872326 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.872779 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.372469 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.372549 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.372888 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:35.872424 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:35.872499 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:35.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:36.372415 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.372487 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.372800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:36.372853 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:36.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:36.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:36.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.372265 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.372352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.372682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:37.872370 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:37.872441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:37.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.372314 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:38.872216 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:38.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:38.872649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:38.872714 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:38.874746 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:38.934878 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:38.934918 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:38.934938 1255403 retry.go:31] will retry after 11.915569958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
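
The retry delays recorded so far (5.88s, 6.78s, 21.27s, 29.30s for storage-provisioner; 5.90s, 11.92s, 16.25s, 37.61s for storageclass) grow roughly geometrically with random jitter. A small sketch of that kind of schedule; the doubling factor and the 50% jitter cap are assumptions for illustration, not minikube's exact constants:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextDelay doubles a base delay per attempt and adds up to 50% random
    // jitter, which is why the logged delays are close to but never exactly
    // on a power-of-two schedule.
    func nextDelay(attempt int, base time.Duration) time.Duration {
        d := base << attempt // base * 2^attempt
        jitter := time.Duration(rand.Int63n(int64(d) / 2))
        return d + jitter
    }

    func main() {
        for i := 0; i < 4; i++ {
            fmt.Println(nextDelay(i, 5*time.Second))
        }
    }

Jitter matters here because two addon manifests are being retried concurrently; without it, their kubectl invocations would tend to land on the struggling apiserver at the same instants.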
	I1217 00:46:39.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.372630 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:39.872679 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.872752 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.873071 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.372962 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.373071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.700947 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:40.758642 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:40.762387 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.762417 1255403 retry.go:31] will retry after 21.268770127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.872611 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.872685 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.872948 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:40.872988 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:41.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.372862 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:41.872999 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:41.873072 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:41.873406 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.373262 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.373529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:42.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:42.872357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:42.872690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:43.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:43.372775 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:43.872456 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:43.872527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:43.872851 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.372890 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.372962 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.373276 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:44.872900 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:44.872976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:44.873274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:45.373130 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.373198 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.373481 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:45.373531 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:45.872183 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:45.872255 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:45.872577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.372267 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:46.872217 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:46.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:46.872602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.372685 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:47.872386 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:47.872467 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:47.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:47.872889 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:48.372210 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.372282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:48.872321 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:48.872397 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:48.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.372328 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.372410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.372788 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:49.872573 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:49.872652 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:49.872990 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:49.873044 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
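
The paired `round_trippers.go` lines come from a transport wrapper that logs each request (verb, URL, Accept and User-Agent headers) and its response; when the TCP connection is refused there is no HTTP response at all, which is why the log shows `status="" headers=""` with 0 milliseconds. A simplified stand-in for such a wrapper (not client-go's actual round_trippers implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // loggingTripper wraps an http.RoundTripper and prints each request and
    // response, in the spirit of the round_trippers lines above.
    type loggingTripper struct{ next http.RoundTripper }

    func (l loggingTripper) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        fmt.Printf("Request verb=%s url=%s accept=%q\n",
            req.Method, req.URL, req.Header.Get("Accept"))
        resp, err := l.next.RoundTrip(req)
        if err != nil {
            // A refused connection surfaces here, before any status exists.
            fmt.Printf("Response status=%q milliseconds=%d err=%v\n",
                "", time.Since(start).Milliseconds(), err)
            return nil, err
        }
        fmt.Printf("Response status=%q milliseconds=%d\n",
            resp.Status, time.Since(start).Milliseconds())
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingTripper{next: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}}
        client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-608344")
    }

A refused dial fails in microseconds, which matches every `milliseconds=0` entry in this stretch of the log.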
	I1217 00:46:50.372786 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.372858 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.850773 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:50.873153 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.873230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.873507 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.907175 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:50.910769 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:50.910800 1255403 retry.go:31] will retry after 16.247326027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:51.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.372321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.372590 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:51.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.872692 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:52.372397 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:52.372848 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:52.872212 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:52.872294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:52.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.372690 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:53.872298 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:53.872374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:53.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:54.372770 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.372844 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.373109 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:54.373151 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:54.872853 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:54.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:54.873266 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.372618 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.373044 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:55.872847 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:55.872929 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:55.873202 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:56.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.373168 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:56.373526 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:56.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:56.872298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:56.872653 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.372334 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.372403 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:57.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:57.872318 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:57.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:58.872429 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:58.872507 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:58.872821 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:58.872881 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:59.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:59.872494 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:59.872570 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:59.872923 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.372776 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:00.872467 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:00.872542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:00.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:00.873000 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:01.372521 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.372606 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.372957 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:01.872597 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:01.872682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:01.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:02.032382 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:02.090439 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:02.094499 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:02.094532 1255403 retry.go:31] will retry after 29.296113507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
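
Note that two distinct endpoints are refusing connections: kubectl, executed inside the node, dials localhost:8441 per the embedded kubeconfig, while the health loop dials the container address 192.168.49.2:8441. Both refusing strongly suggests the apiserver process itself is down rather than a routing or port-forward problem. A throwaway probe of both ports (addresses copied from the log, purely a diagnostic sketch):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
                continue
            }
            conn.Close()
            fmt.Printf("%s: port open\n", addr)
        }
    }

If either port answered, the remaining failures would point at TLS or kubeconfig issues instead; here neither does, consistent with the apiserver never coming up during this window.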
	I1217 00:47:02.372921 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:02.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:02.373278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:02.873066 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:02.873160 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:02.873482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:02.873541 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:03.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:03.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:03.372609 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:03.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:03.872375 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:03.872720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:04.372804 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:04.372891 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:04.373254 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:04.872959 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:04.873031 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:04.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:05.373086 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:05.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:05.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:05.373545 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:05.873124 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:05.873196 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:05.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:06.372203 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:06.372297 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:06.372559 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:06.872285 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:06.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:06.872726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:07.159163 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:07.225140 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:07.225182 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.225201 1255403 retry.go:31] will retry after 37.614827372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.372479 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:07.372553 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:07.372877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:07.872303 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:07.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:07.872631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:07.872689 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:08.372299 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:08.372372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:08.372708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:08.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:08.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:08.872715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:09.372379 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:09.372447 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:09.372796 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:09.872827 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:09.872905 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:09.873268 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:10.373092 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:10.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:10.373486 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:10.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:10.872225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:10.872500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:11.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:11.372299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:11.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:11.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:11.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:11.872706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:12.372374 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:12.372448 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:12.372709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:12.372750 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:12.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:12.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:12.872684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:13.372290 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:13.372393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:13.372701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:13.872165 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:13.872238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:13.872504 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:14.372605 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:14.372683 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:14.372964 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:14.373015 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:14.872900 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:14.872976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:14.873343 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:15.373127 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:15.373252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:15.373582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:15.872301 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:15.872398 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:15.872748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:16.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:16.372381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:16.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:16.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:16.872394 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:16.872705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:16.872757 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:17.372427 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:17.372500 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:17.372831 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:17.872519 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:17.872615 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:17.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:18.372225 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:18.372298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:18.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:18.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:18.872399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:18.872750 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:18.872814 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:19.372283 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:19.372379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:19.372698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:19.872700 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:19.872787 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:19.873056 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:20.372790 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:20.372868 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:20.373159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:20.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:20.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:20.873262 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:20.873320 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:21.372905 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:21.372976 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:21.373258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:21.873023 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:21.873107 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:21.873458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:22.373158 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:22.373237 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:22.373596 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:22.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:22.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:22.872662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:23.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:23.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:23.372703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:23.372759 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:23.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:23.872374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:23.872695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:24.372755 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:24.372830 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:24.373106 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:24.873063 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:24.873147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:24.873454 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:25.373127 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:25.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:25.373530 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:25.373584 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:25.873160 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:25.873234 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:25.873504 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:26.372211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:26.372293 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:26.372638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:26.872245 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:26.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:26.872719 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:27.372392 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:27.372476 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:27.372787 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:27.872251 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:27.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:27.872655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:27.872704 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:28.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.372383 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:28.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.872629 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.372366 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.372807 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.872715 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.872791 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:29.873212 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:30.372938 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.373018 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.373277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:30.873056 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.873139 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.873488 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.372198 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.372618 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.391812 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:31.449248 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:31.449293 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.449314 1255403 retry.go:31] will retry after 32.643249775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
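The "will retry after 32.643249775s" line above is minikube's generic backoff-and-retry wrapper around the addon apply. A hedged sketch of the same pattern (applyWithRetry and the growth factor are assumptions, not minikube's exact retry.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest`, sleeping an
// increasing interval between failed attempts and returning the last error.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 30 * time.Second // first wait; grows each round
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\noutput:\n%s", err, out)
		fmt.Printf("will retry after %s: %v\n", backoff, lastErr)
		time.Sleep(backoff)
		backoff += backoff / 2 // rough exponential growth
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that backoff alone cannot help here: every attempt fails during validation because kubectl cannot reach the apiserver at localhost:8441, so until the apiserver comes back each retry hits the same connection-refused error.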
	I1217 00:47:31.872710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.872786 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.873055 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:32.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.372938 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.373285 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:32.373340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:32.873121 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.873217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.873546 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.372335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.372605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.873008 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.873404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:34.873456 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:35.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.373619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:35.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.872264 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.372602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:37.372434 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.372516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:37.372976 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:37.872362 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.872747 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.372444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.372518 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.372848 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.872586 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.873000 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:39.372699 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.372776 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.373049 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:39.373096 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:39.872846 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.373177 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.373253 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.373595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.872211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.872651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.372652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.872246 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.872325 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.872669 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:41.872726 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:42.372381 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.372711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:42.872265 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.872338 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.872682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.872482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:43.872795 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:44.372769 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.372846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.373174 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.841021 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:44.872821 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.872904 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.873176 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.901181 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901219 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901313 1255403 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
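The validation error above is not about the manifest's contents: kubectl apply first downloads the cluster's OpenAPI schema, and with the apiserver refusing connections that download fails before any YAML is checked. Passing --validate=false, as the message suggests, would only move the failure to the actual submission, which needs the same unreachable apiserver. A sketch that makes the failure mode explicit by probing the apiserver's /readyz endpoint first (an illustrative assumption; minikube does not necessarily gate applies this way):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady reports whether the kube-apiserver answers its /readyz
// health endpoint. TLS verification is skipped because minikube's
// apiserver certificate is signed by a cluster-local CA.
func apiserverReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false // e.g. "connect: connection refused", as in this log
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverReady("https://192.168.49.2:8441"))
}

On clusters where anonymous auth is disabled, /readyz may answer 401/403 and the probe would need a bearer token, so treat this as a reachability check rather than an authoritative health check.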
	I1217 00:47:45.372791 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.372857 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:45.872997 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.873071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.873409 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:45.873479 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:46.372167 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.372668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:46.872419 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.872488 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.872764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.372445 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.372517 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.372854 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.872444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.872552 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.872905 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:48.372585 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.372659 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.372975 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:48.373027 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:48.872695 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.872773 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.873117 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.372676 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.372750 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.872988 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.873056 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.873314 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:50.373106 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.373187 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.373532 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:50.373602 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:50.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.872393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.872755 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.372443 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.372822 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.872619 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.872982 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.372733 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.872278 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.872351 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:52.872651 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:53.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.372739 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:53.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.872729 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.372572 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.372934 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.872918 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:54.873327 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:55.373081 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:55.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.372327 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.372740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.872476 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.872974 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:57.372728 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.372818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.373081 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:57.373130 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:57.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.872949 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.873273 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.373071 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.373147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.872181 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.872282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.872778 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.372499 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.372573 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.372928 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.872817 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.872915 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.873279 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:59.873340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:00.373167 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.373598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:00.872322 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.872396 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.372325 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.372746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:02.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:02.372982 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:02.872650 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.872731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.873080 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.372870 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.372941 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.373206 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.872565 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.872662 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.872994 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:04.093431 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:48:04.161956 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165693 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165804 1255403 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:48:04.168987 1255403 out.go:179] * Enabled addons: 
	I1217 00:48:04.172517 1255403 addons.go:530] duration metric: took 1m49.814444692s for enable addons: enabled=[]
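[Editor's note] The storage-provisioner failure above is kubectl's client-side validation failing because it cannot download the OpenAPI schema from the still-unreachable apiserver on localhost:8441; minikube captures the stderr, logs the W-level "apply failed, will retry" line, and ultimately gives up with an empty enabled=[] set after 1m49s. A minimal sketch of that apply-with-retry callback shape; the function signature, attempt count, and backoff are assumptions, not minikube's addons.go API:

	// Hypothetical sketch of the "apply failed, will retry" pattern in the
	// log: shell out to kubectl apply and retry on a non-zero exit status.
	package addons

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
			cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			// Mirrors the warning lines above, preserving kubectl's stderr
			// (including its --validate=false hint) for the final report.
			lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
			time.Sleep(backoff)
		}
		return lastErr
	}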
	I1217 00:48:04.372853 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.372931 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:04.373316 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:04.872985 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.873066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.873348 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.373121 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.373201 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.373539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.873175 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.873252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.873567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.372269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.372345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.372632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.872369 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.872456 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.872833 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:06.872898 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:07.372604 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.373010 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:07.872787 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.872855 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.873139 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.372993 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.373331 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.873147 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.873586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:08.873687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:09.373212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.373288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.373540 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:09.872555 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.872628 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.872945 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:10.372282 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:48:10.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.872369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.872634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:11.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.372756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:11.372815 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:11.872507 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.873053 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.372797 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.372889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.373152 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.872908 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.872978 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.873325 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:13.373184 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.373269 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.373620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:13.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:13.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.872636 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.873084 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.372598 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.372682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.373038 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.872960 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.873043 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.873401 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.373180 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.872199 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:15.872674 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:16.372365 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.372441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.372748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:16.872398 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.872472 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.372683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.872389 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.872465 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.872803 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:17.872859 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:18.372488 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.372894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:18.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.372327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.872501 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.872865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:19.872907 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:20.872498 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.872906 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.372218 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.872390 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.872727 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:22.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.372529 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.372835 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:22.372884 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:22.872512 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.872593 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.872860 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.372249 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.372651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.872324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.372825 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.872830 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.873278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:24.873332 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:25.373061 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.373140 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.373479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:25.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.872230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.872535 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.872474 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.872823 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:27.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.372554 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:27.372599 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:27.872258 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.372395 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.872546 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:29.372522 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.372981 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:29.373040 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:29.872933 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.873371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.372154 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.372225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.372485 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.872261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.872617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.372737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.872313 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.872382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.872638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:31.872679 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:32.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.372650 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:32.872346 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.872430 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.872800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.372247 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.372612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.872344 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.872424 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.872746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:33.872804 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:34.372760 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.372837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.373165 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:34.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.873107 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.873403 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.372796 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.372872 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.873006 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.873411 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:35.873470 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:36.372142 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.372217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.372567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:36.872286 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.872683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.372367 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.372445 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.372772 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.872328 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.872704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:38.372276 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.372706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:38.372765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:38.872447 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.872533 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.872877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.372388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.872617 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.872700 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.873011 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:40.372798 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.372870 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.373242 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:40.373311 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:40.873046 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.873122 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.873375 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.373263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.372227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.372297 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.372622 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.872266 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.872342 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.872665 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:42.872728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:43.372382 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.372462 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:43.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.872545 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.372527 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.372936 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.872877 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.872970 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.873320 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:44.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:45.373112 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:45.373189 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:45.373444 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:45.872210 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:45.872286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:45.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:46.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:46.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:46.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:46.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:46.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:46.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:47.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:47.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:47.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:47.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:47.872245 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:47.872323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:47.872653 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:48.372340 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:48.372414 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:48.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:48.872285 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:48.872370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:48.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:49.372492 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:49.372567 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:49.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:49.372966 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:49.872750 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:49.872825 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:49.873079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:50.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:50.372959 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:50.373332 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:50.873100 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:50.873177 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:50.873506 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:51.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:51.372287 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:51.372545 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:51.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:51.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:51.872736 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:51.872804 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:52.372475 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:52.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:52.372896 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:52.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:52.872302 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:52.872605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:53.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:53.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:53.372717 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:53.872428 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:53.872516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:53.872900 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:53.872960 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:54.372871 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:54.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:54.373201 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:54.872863 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:54.872939 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:54.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:55.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:55.373131 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:55.373475 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:55.873120 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:55.873191 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:55.873448 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:55.873490 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:56.372189 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:56.372265 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:56.372594 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 poll repeats every ~500ms from 00:48:56 through 00:49:58, each request answered with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 re-logs the "will retry" warning roughly every two seconds ...]
	 >
	I1217 00:49:58.372963 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.872621 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:58.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.873021 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.873080 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.372773 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.372851 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.373182 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.873119 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:59.873197 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.873526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.872368 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:00.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.872754 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:01.372719 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:01.872316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.872587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.372293 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.372385 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.372720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.872309 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:02.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.372341 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.372718 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.372786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:03.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.872930 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.373171 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.373565 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.872563 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:04.872640 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.872940 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.372260 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.372656 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.872400 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:05.872490 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.872830 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.872896 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
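[editor's note] "connect: connection refused" is the kernel on 192.168.49.2 actively rejecting the SYN: the host is reachable, but nothing is bound to port 8441, i.e. kube-apiserver has not (re)started. That is a different failure from a dial timeout (host or network down), and on Linux the two can be told apart with a cheap TCP probe. A sketch under that assumption, address copied from the log:

    // tcpprobe.go: distinguish "connection refused" (host up, port closed)
    // from a dial timeout (host/network unreachable). Linux-only errno check.
    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"os"
    	"syscall"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err == nil {
    		conn.Close()
    		fmt.Println("port open: something is listening on 8441")
    		return
    	}
    	if errors.Is(err, syscall.ECONNREFUSED) {
    		// The case in this log: the node answers, but kube-apiserver
    		// is not listening on 8441 yet.
    		fmt.Println("refused: host reachable, apiserver not listening")
    		os.Exit(1)
    	}
    	fmt.Println("other dial error (timeout, no route, ...):", err)
    	os.Exit(1)
    }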
	I1217 00:50:06.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.372336 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.372620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.872307 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:06.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.872724 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.372442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.372532 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.372865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:07.872303 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.872568 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.372604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:08.372650 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.872368 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.372413 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.372486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.372844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.872786 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.873227 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.372862 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.372935 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.373226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:10.373272 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.872953 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.373164 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.373473 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.873198 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.873284 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.372715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.872568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.872993 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:12.873048 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:13.372927 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.873165 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.873240 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.873498 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.372301 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.372407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.372871 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.872754 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.872837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.873190 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.873248 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:15.372993 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.373063 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.873087 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.373215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.373295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.373634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.872583 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.372302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:17.372792 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:17.872468 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.872545 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.872894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.372588 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.372657 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.372315 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.872564 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.872648 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:19.873002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.872700 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.372611 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.372691 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.372973 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.872655 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.872734 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.873073 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.873119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:22.372896 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.372972 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.373287 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.873079 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.373186 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.373280 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.373600 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.872716 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.372595 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.372669 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.372947 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:24.373002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
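[editor's note] The loop above spins at a fixed interval until the enclosing test deadline fires. Bounding the wait with a context deadline makes the give-up point explicit instead of relying on the caller's timeout; a sketch using only the standard library, where the 4-minute budget is an illustrative value, not the one minikube uses:

    // waitready.go: the same poll, bounded by a context deadline so a dead
    // apiserver fails fast instead of spinning indefinitely.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func waitForPort(ctx context.Context, addr string, every time.Duration) error {
    	t := time.NewTicker(every)
    	defer t.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("gave up waiting for %s: %w", addr, ctx.Err())
    		case <-t.C:
    			if c, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
    				c.Close()
    				return nil
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	if err := waitForPort(ctx, "192.168.49.2:8441", 500*time.Millisecond); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver port is accepting connections")
    }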
	I1217 00:50:24.872867 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.872947 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.373095 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.373171 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.872191 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.872266 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.872527 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.872403 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.872502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:26.872890 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:27.372542 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.372621 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.372944 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.872693 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.872780 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.873112 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.372992 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.873156 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.873541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.873590 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:29.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.872635 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.872959 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.372576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.872271 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.872677 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.372257 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:31.372730 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.372339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.872735 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.372527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.372826 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:33.372874 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.372580 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.372987 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.872892 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.872961 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.873231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.372626 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.372701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.373063 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:35.373119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.872891 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.872974 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.873309 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.373075 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.373152 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.872187 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.872267 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.872563 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.872562 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:37.872611 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:38.372261 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.372341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.372684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.872478 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.372517 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.372586 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.372901 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.872823 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.872906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:39.873307 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:40.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.373133 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.373501 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.872204 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.872270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.872526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.372331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.872493 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.372459 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.372820 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:42.372870 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:42.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.372358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.372675 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.372764 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.373089 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:44.373137 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.873156 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.873500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.372553 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.372450 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.372523 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.372843 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.872247 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.872328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.872612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.872662 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:47.372273 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.872442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.872571 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.872914 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.372316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.872708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.872770 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
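[editor's note] The warning names the check being retried: the node's "Ready" condition. Once the apiserver answers, the waiter reads status.conditions from the node object and looks for type "Ready" with status "True". A minimal stdlib decode showing the shape of that check (field names match v1.Node; authentication and transport are omitted, so this is not a drop-in client):

    // readycond.go: inspect the "Ready" condition in a node's status, the
    // field the node_ready.go warning above is polling for.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Only the fields the check needs, mirroring v1.Node's status.conditions.
    type node struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func nodeReady(body []byte) (bool, error) {
    	var n node
    	if err := json.Unmarshal(body, &n); err != nil {
    		return false, err
    	}
    	for _, c := range n.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True", nil
    		}
    	}
    	return false, fmt.Errorf("no Ready condition in node status")
    }

    func main() {
    	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
    	ok, err := nodeReady(sample)
    	fmt.Println(ok, err) // false <nil>: node is up but not Ready
    }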
	I1217 00:50:49.372262 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.372671 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.872541 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.372279 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.372679 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.372663 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:51.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:51.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.872354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.872701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.372417 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.372845 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.872527 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.872603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.872927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.372268 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:53.372745 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.872425 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.872508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.872834 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.372720 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.372797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.373062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.872869 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.872951 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.873319 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.373548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:55.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.872221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.872601 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.872374 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.872455 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.872814 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.372544 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.872713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.872786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:58.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.372890 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.872591 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.872679 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.873009 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.372810 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.372884 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.373220 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.872879 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:59.872969 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.873321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:00.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.373766 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:00.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.872691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.372378 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.372784 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.872219 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:01.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.872561 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.372674 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:02.372728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:02.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.372369 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.372442 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.372744 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:03.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.372647 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.372731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.373140 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:04.373195 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.872948 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:04.873032 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.873333 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.373154 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.373234 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.373560 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.872279 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:05.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.372617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.872349 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:06.872425 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.872765 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.872824 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:07.372493 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.372568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.372917 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.872232 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:07.872304 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.372286 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.372363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.372701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.872282 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:08.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.872709 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.372217 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.372584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:09.372636 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:09.872553 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:09.872630 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.873023 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.372813 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.372913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:10.873108 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.873408 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.373293 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:11.373634 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:11.872336 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:11.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.872741 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.372228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.372302 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.372577 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.872294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:12.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.372401 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.372476 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.372816 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:13.872551 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:13.872945 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:14.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.373321 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.872852 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:14.872927 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.372992 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.373066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.373324 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.873205 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:15.873281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.873678 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:16.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.372357 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.372649 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:16.872290 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.872599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.372713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.872413 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:17.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.372379 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.372482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:18.372852 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:18.872514 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:18.872616 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.872985 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.372573 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:19.372649 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.372999 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.872895 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:19.872975 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.873244 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.373182 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:20.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.373611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:20.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:20.872380 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:20.872463 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.872815 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.372512 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:21.372596 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.372877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.872254 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:21.872331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.872674 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.372410 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:22.372485 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.372838 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.872260 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:22.872341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.872644 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:22.872700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:23.372313 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:23.372431 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.372751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.872457 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:23.872534 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.872889 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.372864 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:24.372934 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.373193 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.873012 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:24.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.873516 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:24.873575 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:25.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:25.372410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.372801 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.872339 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:25.872408 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.872741 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.372270 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:26.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.372699 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.872323 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:26.872398 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.372339 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:27.372411 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:27.372716 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:27.872282 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:27.872379 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.872720 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:28.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.372837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.872230 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:28.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.872576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:29.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:29.372758 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:29.872531 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:29.872638 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.872972 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.372756 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:30.372841 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.373119 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.872942 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:30.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.873350 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.373103 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:31.373183 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:31.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.872222 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:31.872307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.872623 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:32.372375 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:32.872367 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.872693 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.372238 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:33.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.372597 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:33.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:33.872742 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:34.372680 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:34.372755 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.373097 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.872882 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:34.872958 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.873222 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.373010 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:35.373091 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.373434 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.873113 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:35.873189 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.873528 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.873587 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:36.372222 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:36.372298 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.372619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.872253 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:36.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.872672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.372242 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.372647 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.872206 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:37.872274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.872529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.372243 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:38.372720 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.872325 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:38.872409 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.872740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.372402 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.372775 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.872763 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:39.872846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.873157 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.372823 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.372906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.373231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:40.373285 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.873058 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:40.873128 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.372149 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.372247 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.372579 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.872273 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:41.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.372258 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.372329 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.372607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.872312 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:42.872392 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.872710 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:42.872765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:43.372447 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.372542 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.372852 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:43.872323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.872586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.372513 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.372585 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.372919 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.872748 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:44.872828 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.873215 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:45.372934 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.373011 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.373274 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:45.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.873496 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.372197 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.872225 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:46.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.872584 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.372332 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.372633 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:47.372687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.872267 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:47.872341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.872687 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.372256 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.372585 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:48.872433 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.872737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.372366 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.372695 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:49.372750 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.872713 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:49.872797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.873197 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.372974 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.373045 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.373414 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.872184 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:50.872263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.872626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.372381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.872281 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:51.872387 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.872719 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:51.872772 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:52.372290 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.372678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.872228 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:52.872327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.372289 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.372672 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:53.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.872680 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.372503 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.372578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.372841 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:54.372883 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:54.872831 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:54.872903 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.873203 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.372953 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.373030 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.373369 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.873134 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:55.873209 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.873469 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.372169 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.372249 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.372599 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.872338 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:56.872414 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:56.872838 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:57.372465 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.372538 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.372790 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:57.872363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.372305 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.372770 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.872250 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:58.872326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.872637 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.372760 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:59.872577 1255403 type.go:168] "Request Body" body=""
	I1217 00:51:59.872701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.873052 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.377171 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.377261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.377582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.872249 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.872322 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.872642 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.872615 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:01.872654 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.372306 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.872696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.372342 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.372415 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.872358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:03.872747 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.872938 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.873008 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.873277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.373195 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.872300 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.872635 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.372224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.372666 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:06.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.872698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.372405 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.372492 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.372840 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.872529 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.872598 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.872872 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.372280 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:08.372751 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:08.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.872352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.372420 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.372508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.372887 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.872807 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.872889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.373055 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:10.373550 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:10.872220 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.872301 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.872593 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.372352 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.372759 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.872616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.372631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.872391 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:12.872789 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:13.372490 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.372574 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.372922 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.872608 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.872675 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.872937 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.372532 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.372618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.373079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.872885 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.872973 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.873356 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:14.873435 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:15.372134 1255403 node_ready.go:38] duration metric: took 6m0.000083316s for node "functional-608344" to be "Ready" ...
	I1217 00:52:15.375301 1255403 out.go:203] 
	W1217 00:52:15.378227 1255403 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:52:15.378247 1255403 out.go:285] * 
	W1217 00:52:15.380407 1255403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:52:15.382698 1255403 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:22 functional-608344 containerd[5242]: time="2025-12-17T00:52:22.479479054Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.520372157Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.522488079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.529714624Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.530093558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.553269303Z" level=info msg="No images store for sha256:9036bf2657962274d57bf1ecb3ee331e146c93afba3fb164a6ce8fbb5db581df"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.555411777Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-608344\""
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.562404481Z" level=info msg="ImageCreate event name:\"sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.563042528Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.348497538Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.351055787Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.352987633Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.365256788Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.410289088Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.412568056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.420713963Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.421114501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.442491011Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.445001497Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.446979202Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.454692752Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.585450881Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.587652621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.598340425Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.598905420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:52:28.311552    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:28.312068    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:28.313688    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:28.314115    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:28.315539    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:52:28 up  6:34,  0 user,  load average: 0.14, 0.25, 0.87
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:52:25 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:25 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 17 00:52:25 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:25 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:25 functional-608344 kubelet[9054]: E1217 00:52:25.931914    9054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:25 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:25 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:26 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 17 00:52:26 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:26 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:26 functional-608344 kubelet[9142]: E1217 00:52:26.638414    9142 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:26 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:26 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:27 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 17 00:52:27 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:27 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:27 functional-608344 kubelet[9177]: E1217 00:52:27.431040    9177 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:27 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:27 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 17 00:52:28 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 kubelet[9234]: E1217 00:52:28.178409    9234 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
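The loop filling the log above is minikube's node-readiness wait: the same GET against https://192.168.49.2:8441/api/v1/nodes/functional-608344 roughly every 500ms until the 6m0s deadline tracked in node_ready.go expires with "context deadline exceeded". A minimal shell sketch of that behavior (an illustration only, not minikube's actual Go implementation; endpoint and timings taken from this run):

    deadline=$((SECONDS + 360))    # 6m0s, as in the log
    until curl -ksf https://192.168.49.2:8441/api/v1/nodes/functional-608344 >/dev/null; do
      (( SECONDS >= deadline )) && { echo "WaitNodeCondition: context deadline exceeded" >&2; exit 1; }
      sleep 0.5                    # matches the ~500ms retry interval in the log
    done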
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (357.452438ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.21s)
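Every kubelet restart in the logs above fails the same validation: "kubelet is configured to not run on a host using cgroup v1", with systemd's restart counter already past 820. A generic way to check which cgroup version a host is running (a diagnostic aside, not part of the test suite):

    # cgroup2fs => cgroup v2; tmpfs => a cgroup v1 hierarchy mounted at /sys/fs/cgroup
    stat -fc %T /sys/fs/cgroup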

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-608344 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-608344 get pods: exit status 1 (103.322489ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-608344 get pods": exit status 1
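The direct kubectl invocation fails for the same underlying reason as the serial tests above: nothing is listening behind 192.168.49.2:8441 while kubelet crash-loops and the apiserver stays down. A minimal reproduction outside the test harness (context name and endpoint taken from this run):

    kubectl --context functional-608344 get pods    # same "connection ... refused" as out/kubectl
    curl -k https://192.168.49.2:8441/livez         # curl: (7) ... Connection refused while the apiserver is down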
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
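The inspect output confirms the container itself is healthy (State.Status is "running") and that 8441/tcp is published to 127.0.0.1:33946; only the apiserver behind it is down. For targeted post-mortem checks, the same fields can be pulled with Go templates instead of reading the full JSON (container name and port from this run):

    docker inspect -f '{{ .State.Status }}' functional-608344
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-608344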
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (321.768868ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr                                                  │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:latest                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add minikube-local-cache-test:functional-608344                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache delete minikube-local-cache-test:functional-608344                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl images                                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ cache          │ functional-608344 cache reload                                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ kubectl        │ functional-608344 kubectl -- --context functional-608344 get pods                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
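Note the last audit row: the kubectl pass-through started at 00:52 UTC and never recorded an END TIME, matching the kubectl failures this post-mortem covers. It can be replayed by hand with the same binary; everything after the bare -- is handed to kubectl unchanged:

	out/minikube-linux-arm64 -p functional-608344 kubectl -- --context functional-608344 get pods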
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:46:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:46:09.841325 1255403 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:46:09.841557 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841588 1255403 out.go:374] Setting ErrFile to fd 2...
	I1217 00:46:09.841608 1255403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:46:09.841909 1255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:46:09.842319 1255403 out.go:368] Setting JSON to false
	I1217 00:46:09.843208 1255403 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23320,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:46:09.843304 1255403 start.go:143] virtualization:  
	I1217 00:46:09.846714 1255403 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:46:09.849718 1255403 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:46:09.849800 1255403 notify.go:221] Checking for updates...
	I1217 00:46:09.855303 1255403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:46:09.858207 1255403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:09.860971 1255403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:46:09.863762 1255403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:46:09.866648 1255403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:46:09.869965 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:09.870075 1255403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:46:09.899794 1255403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:46:09.899910 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:09.954202 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:09.945326941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:09.954303 1255403 docker.go:319] overlay module found
	I1217 00:46:09.957332 1255403 out.go:179] * Using the docker driver based on existing profile
	I1217 00:46:09.960126 1255403 start.go:309] selected driver: docker
	I1217 00:46:09.960147 1255403 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:09.960238 1255403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:46:09.960367 1255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:46:10.027336 1255403 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:46:10.013273525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:46:10.027811 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:10.027879 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:10.027939 1255403 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:10.033595 1255403 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:46:10.036654 1255403 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:46:10.039839 1255403 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:46:10.042883 1255403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:46:10.042915 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:10.042969 1255403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:46:10.042980 1255403 cache.go:65] Caching tarball of preloaded images
	I1217 00:46:10.043067 1255403 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:46:10.043077 1255403 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:46:10.043192 1255403 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:46:10.064109 1255403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:46:10.064135 1255403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:46:10.064157 1255403 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:46:10.064192 1255403 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:46:10.064257 1255403 start.go:364] duration metric: took 41.379µs to acquireMachinesLock for "functional-608344"
	I1217 00:46:10.064326 1255403 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:46:10.064336 1255403 fix.go:54] fixHost starting: 
	I1217 00:46:10.064635 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:10.082218 1255403 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:46:10.082251 1255403 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:46:10.085538 1255403 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:46:10.085593 1255403 machine.go:94] provisionDockerMachine start ...
	I1217 00:46:10.085773 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.104030 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.104380 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.104395 1255403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:46:10.233303 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.233328 1255403 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:46:10.233404 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.250839 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.251149 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.251164 1255403 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:46:10.396645 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:46:10.396749 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.422445 1255403 main.go:143] libmachine: Using SSH client type: native
	I1217 00:46:10.422746 1255403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:46:10.422762 1255403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:46:10.553926 1255403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:46:10.553954 1255403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:46:10.554002 1255403 ubuntu.go:190] setting up certificates
	I1217 00:46:10.554025 1255403 provision.go:84] configureAuth start
	I1217 00:46:10.554113 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:10.571790 1255403 provision.go:143] copyHostCerts
	I1217 00:46:10.571842 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571886 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:46:10.571897 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:46:10.571976 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:46:10.572067 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572088 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:46:10.572098 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:46:10.572127 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:46:10.572172 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572192 1255403 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:46:10.572198 1255403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:46:10.572222 1255403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:46:10.572274 1255403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
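provision.go regenerates server.pem with the SAN list shown above (127.0.0.1, 192.168.49.2, functional-608344, localhost, minikube). If a certificate mismatch is suspected, the baked-in SANs can be read back directly; the path is from this run, and the -ext flag assumes OpenSSL 1.1.1 or newer:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem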
	I1217 00:46:10.693030 1255403 provision.go:177] copyRemoteCerts
	I1217 00:46:10.693099 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:46:10.693140 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.710526 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.805595 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:46:10.805709 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:46:10.823672 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:46:10.823734 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:46:10.841740 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:46:10.841805 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:46:10.859736 1255403 provision.go:87] duration metric: took 305.682111ms to configureAuth
	I1217 00:46:10.859764 1255403 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:46:10.859948 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:10.859960 1255403 machine.go:97] duration metric: took 774.357768ms to provisionDockerMachine
	I1217 00:46:10.859968 1255403 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:46:10.859979 1255403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:46:10.860038 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:46:10.860081 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:10.876877 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:10.973995 1255403 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:46:10.977418 1255403 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:46:10.977440 1255403 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:46:10.977445 1255403 command_runner.go:130] > VERSION_ID="12"
	I1217 00:46:10.977450 1255403 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:46:10.977468 1255403 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:46:10.977472 1255403 command_runner.go:130] > ID=debian
	I1217 00:46:10.977477 1255403 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:46:10.977482 1255403 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:46:10.977488 1255403 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:46:10.977542 1255403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:46:10.977565 1255403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:46:10.977576 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:46:10.977631 1255403 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:46:10.977740 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:46:10.977753 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /etc/ssl/certs/12112432.pem
	I1217 00:46:10.977836 1255403 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:46:10.977845 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> /etc/test/nested/copy/1211243/hosts
	I1217 00:46:10.977888 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:46:10.985858 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:11.003616 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:46:11.025062 1255403 start.go:296] duration metric: took 165.078815ms for postStartSetup
	I1217 00:46:11.025171 1255403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:46:11.025235 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.042501 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.135058 1255403 command_runner.go:130] > 18%
	I1217 00:46:11.135791 1255403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:46:11.141537 1255403 command_runner.go:130] > 159G
	I1217 00:46:11.142252 1255403 fix.go:56] duration metric: took 1.077909712s for fixHost
	I1217 00:46:11.142316 1255403 start.go:83] releasing machines lock for "functional-608344", held for 1.07800111s
	I1217 00:46:11.142412 1255403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:46:11.164178 1255403 ssh_runner.go:195] Run: cat /version.json
	I1217 00:46:11.164239 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.164497 1255403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:46:11.164553 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:11.196976 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.203865 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:11.389604 1255403 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:46:11.389719 1255403 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:46:11.389906 1255403 ssh_runner.go:195] Run: systemctl --version
	I1217 00:46:11.396314 1255403 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:46:11.396351 1255403 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:46:11.396781 1255403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:46:11.401747 1255403 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:46:11.401791 1255403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:46:11.401850 1255403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:46:11.410012 1255403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:46:11.410035 1255403 start.go:496] detecting cgroup driver to use...
	I1217 00:46:11.410068 1255403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:46:11.410119 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:46:11.427912 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:46:11.441702 1255403 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:46:11.441797 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:46:11.458922 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:46:11.473296 1255403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:46:11.602661 1255403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:46:11.727834 1255403 docker.go:234] disabling docker service ...
	I1217 00:46:11.727932 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:46:11.743775 1255403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:46:11.756449 1255403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:46:11.884208 1255403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:46:12.041744 1255403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:46:12.055323 1255403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:46:12.069025 1255403 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:46:12.070254 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:46:12.080613 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:46:12.090397 1255403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:46:12.090539 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:46:12.100248 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.110370 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:46:12.120135 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:46:12.130289 1255403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:46:12.139404 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:46:12.148731 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:46:12.158190 1255403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:46:12.167677 1255403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:46:12.175393 1255403 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:46:12.175487 1255403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:46:12.183394 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.301782 1255403 ssh_runner.go:195] Run: sudo systemctl restart containerd
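The sed pipeline above rewrites /etc/containerd/config.toml in place (cgroupfs cgroup driver, pause:3.10.1 sandbox image, /etc/cni/net.d conf_dir, unprivileged ports enabled) before this restart. A quick sanity check, inside the container, that the edits landed and the CRI socket answers afterwards; grep is used rather than assuming a section layout, since the TOML paths differ across containerd versions:

	grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	sudo crictl version   # should report RuntimeName: containerd, as it does below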
	I1217 00:46:12.439684 1255403 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:46:12.439765 1255403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:46:12.443346 1255403 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 00:46:12.443371 1255403 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:46:12.443378 1255403 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1217 00:46:12.443385 1255403 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:12.443391 1255403 command_runner.go:130] > Access: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443396 1255403 command_runner.go:130] > Modify: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443401 1255403 command_runner.go:130] > Change: 2025-12-17 00:46:12.390592502 +0000
	I1217 00:46:12.443405 1255403 command_runner.go:130] >  Birth: -
	I1217 00:46:12.443632 1255403 start.go:564] Will wait 60s for crictl version
	I1217 00:46:12.443703 1255403 ssh_runner.go:195] Run: which crictl
	I1217 00:46:12.446726 1255403 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:46:12.447174 1255403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:46:12.472886 1255403 command_runner.go:130] > Version:  0.1.0
	I1217 00:46:12.473228 1255403 command_runner.go:130] > RuntimeName:  containerd
	I1217 00:46:12.473244 1255403 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 00:46:12.473249 1255403 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:46:12.475292 1255403 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:46:12.475358 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.494552 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.496407 1255403 ssh_runner.go:195] Run: containerd --version
	I1217 00:46:12.517873 1255403 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 00:46:12.525827 1255403 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:46:12.528776 1255403 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:46:12.544531 1255403 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:46:12.548354 1255403 command_runner.go:130] > 192.168.49.1	host.minikube.internal
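The docker network inspect template just above is dense; an equivalent standalone query for only the subnet and gateway of the cluster network (per the container's network settings earlier in this log, 192.168.49.0/24 via 192.168.49.1):

	docker network inspect functional-608344 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'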
	I1217 00:46:12.548680 1255403 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:46:12.548798 1255403 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:46:12.548865 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.573132 1255403 command_runner.go:130] > {
	I1217 00:46:12.573158 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.573163 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573172 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.573185 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573191 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.573195 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573199 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573208 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.573215 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573220 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.573226 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573230 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573234 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573237 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573252 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.573259 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573265 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.573268 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573273 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573284 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.573288 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573292 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.573296 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573300 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573306 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573310 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573323 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.573327 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573333 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.573339 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573350 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573361 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.573365 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573371 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.573376 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.573379 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573385 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573389 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573398 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.573404 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573409 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.573412 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573418 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573426 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.573432 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573437 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.573440 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573446 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573449 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573455 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573459 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573465 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573468 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573475 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.573478 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573484 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.573490 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573494 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573504 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.573508 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573512 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.573521 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573529 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573541 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573546 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573551 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573555 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573560 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573567 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.573574 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573580 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.573583 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573590 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573598 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.573605 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573609 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.573613 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573622 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573625 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573629 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573634 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573660 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573664 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573671 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.573681 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573690 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.573694 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573698 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573710 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.573714 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573719 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.573725 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573729 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573733 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573736 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573743 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.573753 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573759 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.573762 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573765 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573773 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.573776 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573784 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.573790 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573794 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.573800 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573804 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573816 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.573819 1255403 command_runner.go:130] >     },
	I1217 00:46:12.573822 1255403 command_runner.go:130] >     {
	I1217 00:46:12.573830 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.573836 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.573842 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.573845 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573851 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.573859 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.573864 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.573868 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.573875 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.573879 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.573884 1255403 command_runner.go:130] >       },
	I1217 00:46:12.573888 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.573894 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.573897 1255403 command_runner.go:130] >     }
	I1217 00:46:12.573900 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.573903 1255403 command_runner.go:130] > }
	I1217 00:46:12.574073 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.574086 1255403 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:46:12.574147 1255403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:46:12.596238 1255403 command_runner.go:130] > {
	I1217 00:46:12.596261 1255403 command_runner.go:130] >   "images":  [
	I1217 00:46:12.596266 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596284 1255403 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:46:12.596300 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596310 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:46:12.596314 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596318 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596329 1255403 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 00:46:12.596337 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596342 1255403 command_runner.go:130] >       "size":  "40636774",
	I1217 00:46:12.596346 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596353 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596356 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596362 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596372 1255403 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:46:12.596380 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596386 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:46:12.596389 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596393 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596402 1255403 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:46:12.596408 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596413 1255403 command_runner.go:130] >       "size":  "8034419",
	I1217 00:46:12.596417 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596422 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596427 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596432 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596442 1255403 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:46:12.596446 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596451 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:46:12.596457 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596464 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596472 1255403 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 00:46:12.596477 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596482 1255403 command_runner.go:130] >       "size":  "21168808",
	I1217 00:46:12.596486 1255403 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:46:12.596492 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596500 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596506 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596513 1255403 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:46:12.596518 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596523 1255403 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:46:12.596529 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596533 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596540 1255403 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1217 00:46:12.596547 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596551 1255403 command_runner.go:130] >       "size":  "21136588",
	I1217 00:46:12.596554 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596569 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596572 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596577 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596585 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596591 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596594 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596622 1255403 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:46:12.596626 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596638 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:46:12.596641 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596645 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596659 1255403 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1217 00:46:12.596662 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596667 1255403 command_runner.go:130] >       "size":  "24678359",
	I1217 00:46:12.596673 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596683 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596690 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596694 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596697 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596707 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596710 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596717 1255403 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:46:12.596726 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596733 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:46:12.596739 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596743 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596751 1255403 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1217 00:46:12.596755 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596761 1255403 command_runner.go:130] >       "size":  "20661043",
	I1217 00:46:12.596765 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596771 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596775 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596784 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596788 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596791 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596795 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596808 1255403 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:46:12.596813 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596818 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:46:12.596824 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596828 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596836 1255403 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:46:12.596839 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596847 1255403 command_runner.go:130] >       "size":  "22429671",
	I1217 00:46:12.596853 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596857 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596863 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596866 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596873 1255403 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:46:12.596879 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596885 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:46:12.596889 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596900 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596908 1255403 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1217 00:46:12.596914 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596923 1255403 command_runner.go:130] >       "size":  "15391364",
	I1217 00:46:12.596927 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.596931 1255403 command_runner.go:130] >         "value":  "0"
	I1217 00:46:12.596936 1255403 command_runner.go:130] >       },
	I1217 00:46:12.596940 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.596947 1255403 command_runner.go:130] >       "pinned":  false
	I1217 00:46:12.596950 1255403 command_runner.go:130] >     },
	I1217 00:46:12.596953 1255403 command_runner.go:130] >     {
	I1217 00:46:12.596960 1255403 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:46:12.596967 1255403 command_runner.go:130] >       "repoTags":  [
	I1217 00:46:12.596971 1255403 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:46:12.596975 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.596981 1255403 command_runner.go:130] >       "repoDigests":  [
	I1217 00:46:12.596989 1255403 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 00:46:12.596996 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.597000 1255403 command_runner.go:130] >       "size":  "267939",
	I1217 00:46:12.597004 1255403 command_runner.go:130] >       "uid":  {
	I1217 00:46:12.597008 1255403 command_runner.go:130] >         "value":  "65535"
	I1217 00:46:12.597013 1255403 command_runner.go:130] >       },
	I1217 00:46:12.597023 1255403 command_runner.go:130] >       "username":  "",
	I1217 00:46:12.597027 1255403 command_runner.go:130] >       "pinned":  true
	I1217 00:46:12.597030 1255403 command_runner.go:130] >     }
	I1217 00:46:12.597033 1255403 command_runner.go:130] >   ]
	I1217 00:46:12.597039 1255403 command_runner.go:130] > }
	I1217 00:46:12.599655 1255403 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:46:12.599676 1255403 cache_images.go:86] Images are preloaded, skipping loading
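	For reference, the crictl listings above return a protojson-formatted payload, and parsing it is how a caller can decide that the preload already contains every image the cluster needs. A minimal Go sketch of consuming that output (assuming crictl is on PATH and sudo is available; the struct fields mirror the JSON keys in the dump, the expected-image list is a hypothetical subset, and this is an illustration rather than minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the per-image objects in `crictl images --output json`.
	type criImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Index every repo tag the runtime already knows about.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Hypothetical subset of the v1.35.0-beta.0 preload to verify.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/pause:3.10.1",
		} {
			fmt.Printf("%s preloaded=%v\n", want, have[want])
		}
	}

	The double spaces after the colons in the dump appear to be an artifact of crictl's protojson encoder; encoding/json accepts them without special handling.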
	I1217 00:46:12.599685 1255403 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:46:12.599841 1255403 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
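	The kubelet drop-in above is rendered from a template and written out as the 328-byte 10-kubeadm.conf scp'd a few lines below. A sketch of that rendering step in Go text/template, with the template text reconstructed from the rendered unit above rather than copied from minikube's source (all names here are local to the sketch):

	package main

	import (
		"os"
		"text/template"
	)

	// Reconstructed from the rendered drop-in in the log; not minikube's template.
	const kubeletDropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		err := t.Execute(os.Stdout, map[string]string{
			"Runtime":           "containerd",
			"KubernetesVersion": "v1.35.0-beta.0",
			"NodeName":          "functional-608344",
			"NodeIP":            "192.168.49.2",
		})
		if err != nil {
			panic(err)
		}
	}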
	I1217 00:46:12.599942 1255403 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:46:12.623140 1255403 command_runner.go:130] > {
	I1217 00:46:12.623159 1255403 command_runner.go:130] >   "cniconfig": {
	I1217 00:46:12.623164 1255403 command_runner.go:130] >     "Networks": [
	I1217 00:46:12.623168 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623173 1255403 command_runner.go:130] >         "Config": {
	I1217 00:46:12.623178 1255403 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 00:46:12.623184 1255403 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 00:46:12.623188 1255403 command_runner.go:130] >           "Plugins": [
	I1217 00:46:12.623192 1255403 command_runner.go:130] >             {
	I1217 00:46:12.623196 1255403 command_runner.go:130] >               "Network": {
	I1217 00:46:12.623200 1255403 command_runner.go:130] >                 "ipam": {},
	I1217 00:46:12.623205 1255403 command_runner.go:130] >                 "type": "loopback"
	I1217 00:46:12.623209 1255403 command_runner.go:130] >               },
	I1217 00:46:12.623214 1255403 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 00:46:12.623218 1255403 command_runner.go:130] >             }
	I1217 00:46:12.623221 1255403 command_runner.go:130] >           ],
	I1217 00:46:12.623230 1255403 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 00:46:12.623234 1255403 command_runner.go:130] >         },
	I1217 00:46:12.623239 1255403 command_runner.go:130] >         "IFName": "lo"
	I1217 00:46:12.623243 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623246 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623250 1255403 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 00:46:12.623253 1255403 command_runner.go:130] >     "PluginDirs": [
	I1217 00:46:12.623257 1255403 command_runner.go:130] >       "/opt/cni/bin"
	I1217 00:46:12.623260 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623265 1255403 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 00:46:12.623269 1255403 command_runner.go:130] >     "Prefix": "eth"
	I1217 00:46:12.623272 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623284 1255403 command_runner.go:130] >   "config": {
	I1217 00:46:12.623288 1255403 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 00:46:12.623292 1255403 command_runner.go:130] >       "/etc/cdi",
	I1217 00:46:12.623297 1255403 command_runner.go:130] >       "/var/run/cdi"
	I1217 00:46:12.623300 1255403 command_runner.go:130] >     ],
	I1217 00:46:12.623303 1255403 command_runner.go:130] >     "cni": {
	I1217 00:46:12.623306 1255403 command_runner.go:130] >       "binDir": "",
	I1217 00:46:12.623310 1255403 command_runner.go:130] >       "binDirs": [
	I1217 00:46:12.623314 1255403 command_runner.go:130] >         "/opt/cni/bin"
	I1217 00:46:12.623317 1255403 command_runner.go:130] >       ],
	I1217 00:46:12.623322 1255403 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 00:46:12.623325 1255403 command_runner.go:130] >       "confTemplate": "",
	I1217 00:46:12.623329 1255403 command_runner.go:130] >       "ipPref": "",
	I1217 00:46:12.623333 1255403 command_runner.go:130] >       "maxConfNum": 1,
	I1217 00:46:12.623337 1255403 command_runner.go:130] >       "setupSerially": false,
	I1217 00:46:12.623341 1255403 command_runner.go:130] >       "useInternalLoopback": false
	I1217 00:46:12.623344 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623352 1255403 command_runner.go:130] >     "containerd": {
	I1217 00:46:12.623356 1255403 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 00:46:12.623361 1255403 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 00:46:12.623366 1255403 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 00:46:12.623369 1255403 command_runner.go:130] >       "runtimes": {
	I1217 00:46:12.623372 1255403 command_runner.go:130] >         "runc": {
	I1217 00:46:12.623377 1255403 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 00:46:12.623381 1255403 command_runner.go:130] >           "PodAnnotations": null,
	I1217 00:46:12.623386 1255403 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 00:46:12.623391 1255403 command_runner.go:130] >           "cgroupWritable": false,
	I1217 00:46:12.623395 1255403 command_runner.go:130] >           "cniConfDir": "",
	I1217 00:46:12.623399 1255403 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 00:46:12.623403 1255403 command_runner.go:130] >           "io_type": "",
	I1217 00:46:12.623406 1255403 command_runner.go:130] >           "options": {
	I1217 00:46:12.623410 1255403 command_runner.go:130] >             "BinaryName": "",
	I1217 00:46:12.623414 1255403 command_runner.go:130] >             "CriuImagePath": "",
	I1217 00:46:12.623421 1255403 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 00:46:12.623426 1255403 command_runner.go:130] >             "IoGid": 0,
	I1217 00:46:12.623429 1255403 command_runner.go:130] >             "IoUid": 0,
	I1217 00:46:12.623434 1255403 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 00:46:12.623437 1255403 command_runner.go:130] >             "Root": "",
	I1217 00:46:12.623441 1255403 command_runner.go:130] >             "ShimCgroup": "",
	I1217 00:46:12.623445 1255403 command_runner.go:130] >             "SystemdCgroup": false
	I1217 00:46:12.623448 1255403 command_runner.go:130] >           },
	I1217 00:46:12.623453 1255403 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 00:46:12.623459 1255403 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 00:46:12.623463 1255403 command_runner.go:130] >           "runtimePath": "",
	I1217 00:46:12.623468 1255403 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 00:46:12.623473 1255403 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 00:46:12.623476 1255403 command_runner.go:130] >           "snapshotter": ""
	I1217 00:46:12.623479 1255403 command_runner.go:130] >         }
	I1217 00:46:12.623483 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623486 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623495 1255403 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 00:46:12.623500 1255403 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 00:46:12.623507 1255403 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 00:46:12.623511 1255403 command_runner.go:130] >     "disableApparmor": false,
	I1217 00:46:12.623517 1255403 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 00:46:12.623522 1255403 command_runner.go:130] >     "disableProcMount": false,
	I1217 00:46:12.623526 1255403 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 00:46:12.623530 1255403 command_runner.go:130] >     "enableCDI": true,
	I1217 00:46:12.623534 1255403 command_runner.go:130] >     "enableSelinux": false,
	I1217 00:46:12.623538 1255403 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 00:46:12.623542 1255403 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 00:46:12.623547 1255403 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 00:46:12.623551 1255403 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 00:46:12.623555 1255403 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 00:46:12.623559 1255403 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 00:46:12.623563 1255403 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 00:46:12.623571 1255403 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623576 1255403 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 00:46:12.623581 1255403 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 00:46:12.623585 1255403 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 00:46:12.623590 1255403 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 00:46:12.623593 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623596 1255403 command_runner.go:130] >   "features": {
	I1217 00:46:12.623601 1255403 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 00:46:12.623603 1255403 command_runner.go:130] >   },
	I1217 00:46:12.623607 1255403 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 00:46:12.623617 1255403 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623626 1255403 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 00:46:12.623630 1255403 command_runner.go:130] >   "runtimeHandlers": [
	I1217 00:46:12.623632 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623636 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623640 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623645 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623648 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623651 1255403 command_runner.go:130] >     },
	I1217 00:46:12.623654 1255403 command_runner.go:130] >     {
	I1217 00:46:12.623657 1255403 command_runner.go:130] >       "features": {
	I1217 00:46:12.623662 1255403 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 00:46:12.623666 1255403 command_runner.go:130] >         "user_namespaces": true
	I1217 00:46:12.623670 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623673 1255403 command_runner.go:130] >       "name": "runc"
	I1217 00:46:12.623676 1255403 command_runner.go:130] >     }
	I1217 00:46:12.623678 1255403 command_runner.go:130] >   ],
	I1217 00:46:12.623682 1255403 command_runner.go:130] >   "status": {
	I1217 00:46:12.623685 1255403 command_runner.go:130] >     "conditions": [
	I1217 00:46:12.623688 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623692 1255403 command_runner.go:130] >         "message": "",
	I1217 00:46:12.623695 1255403 command_runner.go:130] >         "reason": "",
	I1217 00:46:12.623699 1255403 command_runner.go:130] >         "status": true,
	I1217 00:46:12.623708 1255403 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 00:46:12.623711 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623714 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623721 1255403 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 00:46:12.623726 1255403 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 00:46:12.623729 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623733 1255403 command_runner.go:130] >         "type": "NetworkReady"
	I1217 00:46:12.623737 1255403 command_runner.go:130] >       },
	I1217 00:46:12.623739 1255403 command_runner.go:130] >       {
	I1217 00:46:12.623760 1255403 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 00:46:12.623766 1255403 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 00:46:12.623771 1255403 command_runner.go:130] >         "status": false,
	I1217 00:46:12.623776 1255403 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 00:46:12.623779 1255403 command_runner.go:130] >       }
	I1217 00:46:12.623782 1255403 command_runner.go:130] >     ]
	I1217 00:46:12.623784 1255403 command_runner.go:130] >   }
	I1217 00:46:12.623787 1255403 command_runner.go:130] > }
	I1217 00:46:12.625494 1255403 cni.go:84] Creating CNI manager for ""
	I1217 00:46:12.625564 1255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:46:12.625600 1255403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
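	The kindnet recommendation follows directly from the `crictl info` dump above: RuntimeReady is true, but NetworkReady is false with reason NetworkPluginNotReady because /etc/cni/net.d holds no network config yet. A small Go sketch of reading those readiness conditions (field names mirror the JSON in the dump; an illustration under those assumptions, not minikube's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// condition mirrors the "status.conditions" entries in `crictl info`.
	type condition struct {
		Type    string `json:"type"`
		Status  bool   `json:"status"`
		Reason  string `json:"reason"`
		Message string `json:"message"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "info").Output()
		if err != nil {
			panic(err)
		}
		var info struct {
			Status struct {
				Conditions []condition `json:"conditions"`
			} `json:"status"`
		}
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		for _, c := range info.Status.Conditions {
			fmt.Printf("%s=%v reason=%q %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}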
	I1217 00:46:12.625679 1255403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:46:12.625821 1255403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:46:12.625903 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:46:12.632727 1255403 command_runner.go:130] > kubeadm
	I1217 00:46:12.632744 1255403 command_runner.go:130] > kubectl
	I1217 00:46:12.632749 1255403 command_runner.go:130] > kubelet
	I1217 00:46:12.633544 1255403 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:46:12.633634 1255403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:46:12.641025 1255403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:46:12.653291 1255403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:46:12.665363 1255403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
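	The kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking its documents with a YAML decoder, e.g. to sanity-check the kinds before handing the file to kubeadm (gopkg.in/yaml.v3 is an assumed dependency and the path is hypothetical; any multi-document decoder works):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}

	Recent kubeadm releases also ship a deeper check of their own, `kubeadm config validate --config <file>`.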
	I1217 00:46:12.678080 1255403 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:46:12.681502 1255403 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:46:12.681599 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:12.825775 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:13.622571 1255403 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:46:13.622593 1255403 certs.go:195] generating shared ca certs ...
	I1217 00:46:13.622609 1255403 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:13.622746 1255403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:46:13.622792 1255403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:46:13.622803 1255403 certs.go:257] generating profile certs ...
	I1217 00:46:13.622905 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:46:13.622962 1255403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:46:13.623005 1255403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:46:13.623018 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:46:13.623032 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:46:13.623044 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:46:13.623063 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:46:13.623080 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:46:13.623092 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:46:13.623103 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:46:13.623112 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:46:13.623163 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:46:13.623197 1255403 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:46:13.623208 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:46:13.623239 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:46:13.623268 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:46:13.623296 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:46:13.623339 1255403 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:46:13.623376 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem -> /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.623391 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.623403 1255403 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.630954 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:46:13.648792 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:46:13.668204 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:46:13.687794 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:46:13.706777 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:46:13.724521 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:46:13.741552 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:46:13.758610 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:46:13.775595 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:46:13.791737 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:46:13.808409 1255403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:46:13.825079 1255403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:46:13.838395 1255403 ssh_runner.go:195] Run: openssl version
	I1217 00:46:13.844664 1255403 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:46:13.845138 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.852395 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:46:13.860295 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864169 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864290 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.864356 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:46:13.907286 1255403 command_runner.go:130] > b5213941
	I1217 00:46:13.907795 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:46:13.915373 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.922487 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:46:13.929849 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933445 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933486 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.933532 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:46:13.974007 1255403 command_runner.go:130] > 51391683
	I1217 00:46:13.974086 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:46:13.981522 1255403 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.988760 1255403 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:46:13.996178 1255403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:46:13.999808 1255403 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000049 1255403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.000110 1255403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:46:14.042220 1255403 command_runner.go:130] > 3ec20f2e
	I1217 00:46:14.042784 1255403 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:46:14.050625 1255403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054447 1255403 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:46:14.054541 1255403 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:46:14.054555 1255403 command_runner.go:130] > Device: 259,1	Inode: 1315986     Links: 1
	I1217 00:46:14.054575 1255403 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:46:14.054585 1255403 command_runner.go:130] > Access: 2025-12-17 00:42:05.487679973 +0000
	I1217 00:46:14.054596 1255403 command_runner.go:130] > Modify: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054601 1255403 command_runner.go:130] > Change: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054606 1255403 command_runner.go:130] >  Birth: 2025-12-17 00:38:00.872734248 +0000
	I1217 00:46:14.054705 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:46:14.095552 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.096144 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:46:14.136799 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.137343 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:46:14.178363 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.178447 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:46:14.219183 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.219732 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:46:14.260450 1255403 command_runner.go:130] > Certificate will not expire
	I1217 00:46:14.260974 1255403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:46:14.301394 1255403 command_runner.go:130] > Certificate will not expire
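	Each `openssl x509 ... -checkend 86400` run above asks one question: does the certificate expire within the next 24 hours? The same check expressed in Go with crypto/x509 (the path is one example taken from the log; a sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent to `openssl x509 -checkend 86400`: will the cert's
		// NotAfter fall within the next 24 hours?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}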
	I1217 00:46:14.301907 1255403 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:46:14.302001 1255403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:46:14.302068 1255403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:46:14.331155 1255403 cri.go:89] found id: ""
	I1217 00:46:14.331262 1255403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:46:14.338208 1255403 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:46:14.338230 1255403 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:46:14.338237 1255403 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:46:14.339135 1255403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:46:14.339150 1255403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:46:14.339201 1255403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:46:14.346631 1255403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:46:14.347092 1255403 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.347204 1255403 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "functional-608344" cluster setting kubeconfig missing "functional-608344" context setting]
	I1217 00:46:14.347476 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.347923 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.348081 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
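	The rest.Config dump above is what client-go derives from the repaired kubeconfig: host https://192.168.49.2:8441 plus the profile's client certificate pair. A minimal sketch of producing the same config with client-go (the kubeconfig path here is hypothetical):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		fmt.Println("host:", cfg.Host)
		fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)
	}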
	I1217 00:46:14.348643 1255403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:46:14.348662 1255403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:46:14.348668 1255403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:46:14.348676 1255403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:46:14.348680 1255403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:46:14.348726 1255403 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:46:14.348987 1255403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:46:14.356813 1255403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:46:14.356847 1255403 kubeadm.go:602] duration metric: took 17.690718ms to restartPrimaryControlPlane
	I1217 00:46:14.356857 1255403 kubeadm.go:403] duration metric: took 54.958395ms to StartCluster
	I1217 00:46:14.356874 1255403 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.356946 1255403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.357542 1255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:46:14.357832 1255403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 00:46:14.358027 1255403 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:46:14.358068 1255403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:46:14.358138 1255403 addons.go:70] Setting storage-provisioner=true in profile "functional-608344"
	I1217 00:46:14.358151 1255403 addons.go:239] Setting addon storage-provisioner=true in "functional-608344"
	I1217 00:46:14.358176 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.358595 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.359037 1255403 addons.go:70] Setting default-storageclass=true in profile "functional-608344"
	I1217 00:46:14.359062 1255403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-608344"
	I1217 00:46:14.359347 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.363164 1255403 out.go:179] * Verifying Kubernetes components...
	I1217 00:46:14.370109 1255403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:46:14.395757 1255403 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:46:14.395920 1255403 kapi.go:59] client config for functional-608344: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:46:14.396204 1255403 addons.go:239] Setting addon default-storageclass=true in "functional-608344"
	I1217 00:46:14.396233 1255403 host.go:66] Checking if "functional-608344" exists ...
	I1217 00:46:14.396651 1255403 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:46:14.400122 1255403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:46:14.403014 1255403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.403037 1255403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:46:14.403100 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.432348 1255403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:14.432368 1255403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:46:14.432430 1255403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:46:14.436192 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.459745 1255403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:46:14.589788 1255403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:46:14.612125 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:14.615872 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.372010 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372004 1255403 node_ready.go:35] waiting up to 6m0s for node "functional-608344" to be "Ready" ...
	W1217 00:46:15.372050 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372084 1255403 retry.go:31] will retry after 317.407291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
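
Each failed kubectl apply above is followed by a retry.go line scheduling another attempt after a short, jittered delay ("will retry after 317.407291ms", and so on). A minimal sketch (not minikube's retry.go) of the same retry-with-backoff pattern; it uses a simple doubling delay in place of the jittered delays seen in the log, and the failing function is a stand-in for the kubectl apply call.

```go
// Sketch only: retry a failing operation with a growing delay between
// attempts, logging each scheduled retry like the log lines above.
package main

import (
	"fmt"
	"time"
)

func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // double the wait before the next attempt
	}
	return err
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		// Stand-in for the kubectl apply that fails while the
		// apiserver is down.
		return fmt.Errorf("connection refused")
	})
}
```
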
	I1217 00:46:15.372123 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.372180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
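
The round_trippers lines log every GET with a dual Accept header: the client asks for the Kubernetes protobuf encoding first and falls back to JSON. A minimal sketch (plain net/http, not minikube's transport) of issuing the same request; client-certificate auth is omitted and InsecureSkipVerify is used only to keep the sketch self-contained.

```go
// Sketch only: issue the node GET from the log with the
// protobuf-then-JSON Accept header.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// For illustration only; the real client uses the certs above.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	req, err := http.NewRequest("GET",
		"https://192.168.49.2:8441/api/v1/nodes/functional-608344", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	resp, err := client.Do(req)
	if err != nil {
		// While the apiserver is down this is the "connection refused"
		// seen throughout the log.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```
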
	I1217 00:46:15.372127 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.372222 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372230 1255403 retry.go:31] will retry after 355.943922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.372458 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:15.690082 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:15.728590 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:15.752296 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.756079 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.756112 1255403 retry.go:31] will retry after 490.658856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794006 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:15.794063 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.794090 1255403 retry.go:31] will retry after 355.367864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:15.872255 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:15.872347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:15.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.150146 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.223269 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.227406 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.227444 1255403 retry.go:31] will retry after 644.228248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.247645 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.305567 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.309114 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.309147 1255403 retry.go:31] will retry after 583.888251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.372333 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.372417 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872396 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:16.872489 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:16.872762 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:16.872991 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:16.894225 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:16.973490 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.973584 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.973617 1255403 retry.go:31] will retry after 498.903187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995507 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:16.995580 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:16.995609 1255403 retry.go:31] will retry after 1.192163017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.373109 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.373180 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.373508 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:17.373561 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
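
node_ready.go above waits up to 6m0s for the node to report Ready, polling roughly every 500ms and treating "connection refused" as a transient error to retry. A minimal sketch of that poll-until-ready-or-timeout loop; checkReady is a hypothetical stand-in for the real node-status lookup.

```go
// Sketch only: poll a readiness check on a fixed cadence until it
// succeeds or the context deadline expires.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitNodeReady(ctx context.Context, checkReady func() (bool, error)) error {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer ticker.Stop()
	for {
		ready, err := checkReady()
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for node to be Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	_ = waitNodeReady(ctx, func() (bool, error) {
		// Stand-in: the apiserver is unreachable, as in the log.
		return false, errors.New("connection refused")
	})
}
```
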
	I1217 00:46:17.473767 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:17.533566 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:17.533674 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.533701 1255403 retry.go:31] will retry after 1.256860103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:17.873264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:17.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:17.873742 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.188247 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:18.252406 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.256687 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.256719 1255403 retry.go:31] will retry after 1.144811642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.373049 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.373118 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.373371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:18.790823 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:18.844402 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:18.847927 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.847962 1255403 retry.go:31] will retry after 2.632795947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:18.873097 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:18.873200 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:18.873479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:19.373203 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.373274 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.373606 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:19.373688 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:19.401757 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:19.461824 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:19.461875 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.461894 1255403 retry.go:31] will retry after 1.170153632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:19.872578 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:19.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:19.872951 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.372349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:20.633061 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:20.706366 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:20.706465 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.706522 1255403 retry.go:31] will retry after 4.067917735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:20.872741 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:20.872818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:20.873104 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.372889 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.372963 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.373230 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:21.481608 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:21.538429 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:21.542236 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.542268 1255403 retry.go:31] will retry after 2.033886089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:21.872800 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:21.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:21.873226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:21.873281 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:22.372860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.372933 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.373246 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:22.872860 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:22.872932 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:22.873275 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.372930 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.373010 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.373315 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:23.576715 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:23.645527 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:23.650062 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.650092 1255403 retry.go:31] will retry after 3.729491652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:23.872758 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:23.872840 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:23.873179 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:24.372935 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.373006 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.373284 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:24.373329 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:24.774870 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:24.835617 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:24.839228 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.839262 1255403 retry.go:31] will retry after 3.072905013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:24.872619 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:24.872702 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:24.873062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.372995 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.373306 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:25.873005 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:25.873083 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:25.873336 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:26.373211 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.373294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.373696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:26.373764 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:26.872293 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:26.872371 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:26.872749 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.372236 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.372311 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.372626 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.380005 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:27.448256 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.448292 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.448311 1255403 retry.go:31] will retry after 5.461633916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.872981 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:27.873109 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:27.873476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:27.912882 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:27.976246 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:27.976284 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:27.976302 1255403 retry.go:31] will retry after 5.882789745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:28.373014 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.373087 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.373404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:28.873209 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:28.873345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:28.873722 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:28.873779 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:29.372307 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.372386 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.372743 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:29.872630 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:29.872744 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:29.873074 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.373208 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:30.872993 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:30.873065 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:30.873363 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:31.373163 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.373238 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:31.373629 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:31.872304 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:31.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:31.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.372266 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.372347 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.372712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.872416 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:32.872562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:32.872892 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:32.910180 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:32.967065 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:32.970705 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:32.970737 1255403 retry.go:31] will retry after 5.90385417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.372205 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.372281 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.372548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:33.859276 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:33.872587 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:33.872665 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:33.872976 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:33.873029 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:33.917348 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:33.917388 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:33.917407 1255403 retry.go:31] will retry after 6.782848909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:34.373058 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:34.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:34.373482 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 polled every ~500ms from 00:46:34.872 through 00:46:38.872; every response was "connect: connection refused" ...]
	W1217 00:46:38.872714 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:46:38.874746 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:38.934878 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:38.934918 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:38.934938 1255403 retry.go:31] will retry after 11.915569958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
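
The "will retry after 11.915569958s" lines come from minikube's retry helper, which re-runs the failed kubectl apply with a growing, randomized delay. A minimal sketch of that pattern in Go, assuming illustrative base delays and jitter bounds rather than minikube's actual tuning:

// Hedged sketch: retry with jittered exponential backoff, in the spirit
// of the retry.go lines above. Parameters are illustrative assumptions.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to maxAttempts times. Each wait is the
// current delay scaled by a random 0.5x..1.5x jitter factor, after which
// the delay doubles, producing uneven waits like those in the log.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	i := 0
	err := retryWithBackoff(4, 10*time.Second, func() error {
		i++
		if i < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}

Jitter before doubling is why each manifest's retry chain grows unevenly here (storageclass: 11.9s, 16.2s, 37.6s) instead of in clean powers of two.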
	I1217 00:46:39.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.372630 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:39.872679 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:39.872752 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:39.873071 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.372962 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.373071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:40.700947 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:46:40.758642 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:40.762387 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:40.762417 1255403 retry.go:31] will retry after 21.268770127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
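
Each apply attempt is a plain kubectl invocation run through ssh_runner, and it fails during client-side validation because downloading the OpenAPI schema requires a reachable apiserver; the manifests themselves are never judged invalid. A hedged sketch of driving the same command from Go and classifying that specific failure (the stderr substring check is an assumption for illustration, not minikube's code):

// Hedged sketch: invoke kubectl apply and distinguish "apiserver not
// reachable during validation" from a genuinely bad manifest.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// "failed to download openapi" means validation could not reach
		// the apiserver; retrying later is the right move.
		if strings.Contains(stderr.String(), "failed to download openapi") {
			return fmt.Errorf("apiserver not ready, retry later: %w", err)
		}
		return fmt.Errorf("apply failed: %w\nstderr: %s", err, stderr.String())
	}
	return nil
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	fmt.Println(err)
}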
	I1217 00:46:40.872611 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:40.872685 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:40.872948 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:46:40.872988 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling continued every ~500ms from 00:46:41.372 through 00:46:50.373, all attempts refused, with a node_ready.go:55 retry warning logged after roughly every fifth attempt ...]
	I1217 00:46:50.850773 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:46:50.873153 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:50.873230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:50.873507 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:46:50.907175 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:46:50.910769 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:46:50.910800 1255403 retry.go:31] will retry after 16.247326027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
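
Between apply attempts, node_ready.go keeps polling the node's Ready condition every ~500ms, logging transient errors and continuing. A minimal client-go sketch of that loop; the loop shape and timeout are assumptions, while the kubeconfig path and node name are taken from the log:

// Hedged sketch: poll a node's Ready condition until it is True or a
// deadline passes, tolerating connection-refused errors along the way.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-608344", 2*time.Minute))
}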
	I1217 00:46:51.372232 1255403 type.go:168] "Request Body" body=""
	I1217 00:46:51.372321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:46:51.372590 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms from 00:46:51.872 through 00:47:01.872, all attempts refused, with periodic node_ready.go:55 retry warnings ...]
	I1217 00:47:02.032382 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:02.090439 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:02.094499 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:02.094532 1255403 retry.go:31] will retry after 29.296113507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
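
The paired "Request"/"Response" lines that dominate this log are emitted by a logging round tripper wrapped around the Kubernetes client's HTTP transport; the User-Agent's "v0.0.0" and "kubernetes/$Format" are the placeholders left when a binary is built without version ldflags. A stdlib sketch of the same wrapper idea (field layout approximate, not client-go's exact output):

// Hedged sketch: an http.RoundTripper that logs the request verb, URL,
// and Accept header, then the response status and latency.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("\"Request\" verb=%q url=%q accept=%q\n",
		req.Method, req.URL.String(), req.Header.Get("Accept"))
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Mirrors the empty status="" seen above when dials are refused.
		fmt.Printf("\"Response\" status=\"\" milliseconds=%d err=%v\n", ms, err)
		return nil, err
	}
	fmt.Printf("\"Response\" status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-608344")
}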
	I1217 00:47:02.372921 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:02.372991 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:02.373278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms from 00:47:02.873 through 00:47:06.872, all attempts refused, with periodic node_ready.go:55 retry warnings ...]
	I1217 00:47:07.159163 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:07.225140 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:07.225182 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:07.225201 1255403 retry.go:31] will retry after 37.614827372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
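
Every apply retry in this window is doomed until the apiserver starts listening on 8441 again. One way to avoid burning attempts, shown as a hypothetical illustration rather than minikube's behavior here, is to gate the apply on the apiserver's /readyz endpoint:

// Hedged sketch: probe the apiserver before retrying kubectl apply.
// InsecureSkipVerify is used only because the probe targets the local
// apiserver's self-signed certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(url string) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for !apiserverReady("https://localhost:8441/readyz") {
		fmt.Println("apiserver not ready; holding addon apply")
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver ready; safe to kubectl apply addons")
}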
	I1217 00:47:07.372479 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:07.372553 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:07.372877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms from 00:47:07.872 through 00:47:27.872, all attempts refused, with a node_ready.go:55 retry warning after roughly every fifth attempt ...]
	I1217 00:47:28.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.372383 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:28.872277 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:28.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:28.872629 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.372366 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.372440 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.372807 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:29.872715 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:29.872791 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:29.873159 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:29.873212 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:30.372938 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.373018 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.373277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:30.873056 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:30.873139 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:30.873488 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.372198 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.372272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.372618 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:31.391812 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:47:31.449248 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:31.449293 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:47:31.449314 1255403 retry.go:31] will retry after 32.643249775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
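	retry.go reschedules the failed apply after a backoff delay (32.6s here). A minimal sketch of that retry-with-backoff shape; retry is a hypothetical helper, and the doubling-plus-jitter policy below is assumed for illustration rather than being minikube's exact algorithm:

```go
// A sketch only: retry is hypothetical, not minikube's retry.go. It runs a
// step, and on failure prints "will retry after <delay>" and sleeps before
// trying again, with a growing, jittered delay.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, step func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = step(); err == nil {
			return nil
		}
		// Exponential growth plus random jitter (assumed policy).
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(4, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```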
	I1217 00:47:31.872710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:31.872786 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:31.873055 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:32.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.372938 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.373285 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:32.373340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:32.873121 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:32.873217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:32.873546 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.372244 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.372335 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.372605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:33.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:33.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:33.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:34.873008 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:34.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:34.873404 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:34.873456 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:35.373213 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.373286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.373619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:35.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:35.872264 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:35.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.372602 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:36.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:36.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:36.872711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:37.372434 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.372516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:37.372976 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:37.872362 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:37.872443 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:37.872747 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.372444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.372518 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.372848 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:38.872586 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:38.872668 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:38.873000 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:39.372699 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.372776 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.373049 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:39.373096 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:39.872846 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:39.872924 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:39.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.373177 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.373253 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.373595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:40.872211 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:40.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:40.872651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.372323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.372652 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:41.872246 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:41.872325 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:41.872669 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:41.872726 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:42.372381 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.372454 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.372711 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:42.872265 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:42.872338 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:42.872682 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:43.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:43.872482 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:43.872751 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:43.872795 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:44.372769 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.372846 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.373174 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.841021 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:47:44.872821 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:44.872904 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:44.873176 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:44.901181 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901219 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:47:44.901313 1255403 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
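	The apply never reaches the cluster: kubectl first tries to download the OpenAPI schema from the apiserver at localhost:8441 for client-side validation, and with the apiserver down that fetch itself is refused (--validate=false would only skip the check, not fix the outage). A minimal sketch of shelling out such an addon apply from Go; applyAddon is hypothetical, while the kubectl path, KUBECONFIG, and manifest path are copied from the log:

```go
// A sketch only: applyAddon is hypothetical, not minikube's addons code.
// sudo accepts the VAR=value form before the command name, as in the
// ssh_runner invocation logged above.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down, kubectl exits 1 during OpenAPI-based
		// validation; this is the stderr captured in the log above.
		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}
```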
	I1217 00:47:45.372791 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.372857 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:45.872997 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:45.873071 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:45.873409 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:45.873479 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:46.372167 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.372668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:46.872419 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:46.872488 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:46.872764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.372445 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.372517 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.372854 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:47.872444 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:47.872552 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:47.872905 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:48.372585 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.372659 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.372975 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:48.373027 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:48.872695 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:48.872773 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:48.873117 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.372676 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.372750 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.373076 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:49.872988 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:49.873056 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:49.873314 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:50.373106 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.373187 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.373532 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:50.373602 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:50.872306 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:50.872393 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:50.872755 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.372443 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.372513 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.372822 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:51.872532 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:51.872619 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:51.872982 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.372287 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.372733 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:52.872278 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:52.872351 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:52.872607 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:52.872651 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:53.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.372412 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.372739 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:53.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:53.872388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:53.872729 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.372572 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.372934 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:54.872837 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:54.872918 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:54.873258 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:54.873327 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:55.373081 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.373163 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:55.872223 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:55.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.372327 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.372399 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.372740 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:56.872476 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:56.872557 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:56.872974 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:57.372728 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.372818 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.373081 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:57.373130 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:47:57.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:57.872949 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:57.873273 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.373071 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.373147 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.373459 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:58.872181 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:58.872282 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:58.872778 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.372499 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.372573 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.372928 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:47:59.872817 1255403 type.go:168] "Request Body" body=""
	I1217 00:47:59.872915 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:47:59.873279 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:47:59.873340 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:00.373167 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.373258 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.373598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:00.872322 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:00.872396 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:00.872734 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.372325 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.372400 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.372746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:01.872381 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:01.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:02.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:02.372982 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:02.872650 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:02.872731 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:02.873080 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.372870 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.372941 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.373206 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:03.872565 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:03.872662 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:03.872994 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:04.093431 1255403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:48:04.161956 1255403 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165693 1255403 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:48:04.165804 1255403 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:48:04.168987 1255403 out.go:179] * Enabled addons: 
	I1217 00:48:04.172517 1255403 addons.go:530] duration metric: took 1m49.814444692s for enable addons: enabled=[]
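	The duration metric is wall-clock time measured around the whole enable-addons phase; a minimal sketch, with the sleep standing in for the real work:

```go
// A sketch only: plain wall-clock timing with time.Since, formatted like the
// "duration metric" line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(20 * time.Millisecond) // stand-in for the enable-addons work
	fmt.Printf("duration metric: took %s for enable addons\n", time.Since(start))
}
```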
	I1217 00:48:04.372853 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.372931 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.373250 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:04.373316 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:04.872985 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:04.873066 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:04.873348 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.373121 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.373201 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.373539 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:05.873175 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:05.873252 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:05.873567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.372269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.372345 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.372632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:06.872369 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:06.872456 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:06.872833 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:06.872898 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:07.372604 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.372696 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.373010 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:07.872787 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:07.872855 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:07.873139 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.372911 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.372993 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.373331 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:08.873147 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:08.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:08.873586 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:08.873687 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:09.373212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.373288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.373540 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:09.872555 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:09.872628 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:09.872945 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:10.372282 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.372361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.373587 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:48:10.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:10.872369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:10.872634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:11.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.372364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.372756 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:11.372815 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:11.872507 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:11.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:11.873053 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.372797 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.372889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.373152 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:12.872908 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:12.872978 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:12.873325 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:13.373184 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.373269 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.373620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:13.373700 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:13.872244 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:13.872636 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:13.873084 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.372598 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.372682 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.373038 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:14.872960 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:14.873043 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:14.873401 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.373180 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.373245 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.373497 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:15.872199 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:15.872279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:15.872620 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:15.872674 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:16.372365 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.372441 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.372748 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:16.872398 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:16.872472 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:16.872844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.372277 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.372350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.372683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:17.872389 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:17.872465 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:17.872803 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:17.872859 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:18.372488 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.372562 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.372894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:18.872257 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:18.872334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:18.872668 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.372327 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.372662 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:19.872501 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:19.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:19.872865 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:19.872907 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:20.872498 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:20.872578 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:20.872906 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.372218 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.372296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.372598 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:21.872319 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:21.872390 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:21.872727 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:22.372440 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.372529 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.372835 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:22.372884 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:22.872512 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:22.872593 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:22.872860 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.372249 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.372651 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:23.872248 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:23.872324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:23.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.372486 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.372825 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:24.872830 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:24.872913 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:24.873278 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:24.873332 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:25.373061 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.373140 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.373479 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:25.872158 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:25.872230 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:25.872535 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:26.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:26.872474 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:26.872823 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:27.372212 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.372279 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.372554 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:27.372599 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:27.872258 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:27.872339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:27.872678 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.372395 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.372473 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.372799 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:28.872477 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:28.872546 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:28.872837 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:29.372522 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.372981 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:29.373040 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:29.872933 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:29.873016 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:29.873371 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.372154 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.372225 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.372485 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:30.872188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:30.872261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:30.872617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.372304 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.372737 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:31.872313 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:31.872382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:31.872638 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:31.872679 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:32.372292 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.372650 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:32.872346 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:32.872430 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:32.872800 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.372247 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.372320 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.372612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:33.872344 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:33.872424 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:33.872746 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:33.872804 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:34.372760 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.372837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.373165 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:34.873035 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:34.873107 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:34.873403 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.372796 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.372872 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.373196 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:35.873006 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:35.873085 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:35.873411 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:35.873470 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:36.372142 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.372217 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.372567 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:36.872286 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:36.872360 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:36.872683 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.372367 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.372445 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.372772 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:37.872328 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:37.872402 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:37.872704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:38.372276 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.372706 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:38.372765 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:38.872447 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:38.872533 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:38.872877 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.372316 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.372388 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.372645 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:39.872617 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:39.872700 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:39.873011 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:40.372798 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.372870 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.373242 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:40.373311 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:40.873046 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:40.873122 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:40.873375 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.373188 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.373263 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.373570 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:41.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:41.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:41.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.372227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.372297 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.372622 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:42.872266 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:42.872342 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:42.872665 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:42.872728 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:43.372382 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.372462 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.372797 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:43.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:43.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:43.872545 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.372527 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.372603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.372936 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:44.872877 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:44.872970 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:44.873320 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:44.873377 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:45.373112 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:45.373189 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:45.373444 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:45.872210 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:45.872286 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:45.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:46.372285 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:46.372365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:46.372723 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:46.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:46.872299 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:46.872604 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:47.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:47.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:47.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:47.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:47.872245 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:47.872323 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:47.872653 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:48.372340 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:48.372414 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:48.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:48.872285 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:48.872370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:48.872773 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:49.372492 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:49.372567 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:49.372913 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:49.372966 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:49.872750 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:49.872825 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:49.873079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:50.372866 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:50.372959 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:50.373332 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:50.873100 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:50.873177 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:50.873506 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:51.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:51.372287 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:51.372545 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:51.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:51.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:51.872736 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:51.872804 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:52.372475 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:52.372554 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:52.372896 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:52.872227 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:52.872302 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:52.872605 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:53.372294 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:53.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:53.372717 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:53.872428 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:53.872516 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:53.872900 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:53.872960 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:54.372871 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:54.372942 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:54.373201 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:54.872863 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:54.872939 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:54.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:55.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:55.373131 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:55.373475 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:55.873120 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:55.873191 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:55.873448 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:55.873490 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:56.372189 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:56.372265 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:56.372594 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:56.872333 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:56.872410 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:56.872761 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:57.372437 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:57.372508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:57.372770 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:57.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:57.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:57.872724 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:58.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:58.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:58.372887 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:48:58.372941 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:48:58.872213 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:58.872288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:58.872596 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:59.372284 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:59.372363 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:59.372693 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:48:59.872686 1255403 type.go:168] "Request Body" body=""
	I1217 00:48:59.872770 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:48:59.873119 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:00.372972 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:00.373055 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:00.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:00.373445 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:00.873193 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:00.873272 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:00.873619 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:01.372350 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:01.372429 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:01.372764 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:01.872311 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:01.872383 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:01.872713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:02.372250 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:02.372329 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:02.372669 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:02.872388 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:02.872461 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:02.872782 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:02.872838 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:03.372234 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:03.372307 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:03.372629 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:03.872320 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:03.872407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:03.872766 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:04.372726 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:04.372819 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:04.373182 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:04.872850 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:04.872927 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:04.873211 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:04.873257 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:05.373033 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:05.373116 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:05.373435 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:05.872186 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:05.872288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:05.872632 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:06.372222 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:06.372288 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:06.372541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:06.872249 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:06.872321 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:06.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:07.372231 1255403 type.go:168] "Request Body" body=""
	I1217 00:49:07.372309 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:07.372617 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:07.372674 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-608344 request/response pair repeats every ~500ms from 00:49:07.872 through 00:50:08.372, each attempt returning an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the identical warning — error getting node "functional-608344" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused — roughly every 2 seconds throughout ...]
	I1217 00:50:08.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:08.872368 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.372413 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.372486 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.372844 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.872786 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:09.872876 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.873227 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.372862 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.372935 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.373226 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:10.373272 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
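
Editor's note: the Accept header repeated in every request above is standard Kubernetes client content negotiation: protobuf preferred, JSON as fallback. The literal kubernetes/$Format in the User-Agent suggests a build without version stamping. A minimal sketch of the request as logged (URL and header values copied from the log; this is not the client-go code that produced it):

    // Build the same request the log shows, headers copied verbatim from above.
    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	req, err := http.NewRequest(http.MethodGet,
    		"https://192.168.49.2:8441/api/v1/nodes/functional-608344", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")
    	fmt.Println(req.Method, req.URL, req.Header)
    }
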
	I1217 00:50:10.872876 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:10.872953 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.873290 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.373089 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.373164 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.373473 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.873198 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:11.873284 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.873603 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.372319 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.372395 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.372715 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.872471 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:12.872568 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.872993 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:12.873048 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:13.372927 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.373005 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.873165 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:13.873240 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.873498 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.372301 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.372407 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.372871 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.872754 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:14.872837 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.873190 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.873248 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:15.372993 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.373063 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.373383 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.873087 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:15.873170 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.873529 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.373215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.373295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.373634 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:16.872308 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.872583 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.372302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.372382 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.372726 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:17.372792 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:17.872468 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:17.872545 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.872894 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.372588 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.372657 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.372927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.872288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:18.872364 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.372239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.372315 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.372654 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.872564 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:19.872648 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.872949 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:19.873002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:20.372251 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.372334 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.372689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:20.872349 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.872700 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.372611 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.372691 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.372973 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.872655 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:21.872734 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.873073 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.873119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:22.372896 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.372972 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.373287 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.873079 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:22.873158 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.873431 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.373186 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.373280 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.373600 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.872287 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:23.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.872716 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.372595 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.372669 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.372947 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:24.373002 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.872867 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:24.872947 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.873301 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.373095 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.373171 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.373509 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.872191 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:25.872266 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.872527 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.372330 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.872403 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:26.872502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.872836 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:26.872890 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:27.372542 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.372621 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.372944 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.872693 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:27.872780 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.873112 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.372917 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.372992 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.373381 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.873156 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:28.873226 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.873541 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.873590 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:29.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.372374 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.372731 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.872558 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:29.872635 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.872959 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.372319 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.372576 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.872271 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:30.872350 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.872677 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.372257 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.372676 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:31.372730 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.872239 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:31.872317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.872595 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.372264 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.372339 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.372666 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:32.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.872735 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.372527 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.372826 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:33.372874 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.872284 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:33.872361 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.872725 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.372580 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.372655 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.372987 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.872892 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:34.872961 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.873231 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.372626 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.372701 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.373063 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:35.373119 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.872891 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:35.872974 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.873309 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.373075 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.373152 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.373476 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.872187 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:36.872267 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.872563 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.372288 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.372369 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.872215 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:37.872296 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.872562 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:37.872611 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:38.372261 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.372341 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.372684 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.872399 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:38.872478 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.372517 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.372586 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.372901 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.872823 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:39.872906 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.873251 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:39.873307 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:40.373056 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.373133 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.373501 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.872204 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:40.872270 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.872526 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.372254 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.372331 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.372702 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.872408 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:41.872493 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.872839 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.372459 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.372820 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:42.372870 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:42.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:42.872344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.872686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.372278 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.372358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.372704 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.872259 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:43.872346 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.872611 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.372675 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.372764 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.373089 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:44.373137 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.873076 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:44.873156 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.873500 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.372221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.372553 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.872302 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:45.872380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.872728 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.372450 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.372523 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.372843 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.872247 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:46.872328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.872612 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.872662 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:47.372273 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.372354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.372705 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.872442 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:47.872571 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.872914 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.372241 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.372316 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.372655 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:48.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.872708 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.872770 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:49.372262 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.372344 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.372671 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.872541 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:49.872614 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.872941 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.372279 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.372353 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.372679 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.872299 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:50.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.872703 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.372230 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.372317 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.372663 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:51.372718 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:51.872275 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:51.872354 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.872701 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.372417 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.372502 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.372845 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.872527 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:52.872603 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.872927 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.372268 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.372340 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.372686 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:53.372745 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.872425 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:53.872508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.872834 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.372720 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.372797 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.373062 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.872869 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:54.872951 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.873319 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.373199 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.373548 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:55.373609 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.872221 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:55.872291 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.872601 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.372253 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.372324 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.372658 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.872374 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:56.872455 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.872814 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.372213 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.372294 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.372544 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.872291 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:57.872365 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.872713 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.872786 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:58.372456 1255403 type.go:168] "Request Body" body=""
	I1217 00:50:58.372537 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.372890 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the request/response cycle above repeated on a ~500ms cadence from 00:50:58.872 through 00:51:59.873, every GET to https://192.168.49.2:8441/api/v1/nodes/functional-608344 returning "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurred roughly every two seconds throughout ...]
	I1217 00:52:00.377171 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.377261 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.377582 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.872249 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:00.872322 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.872642 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.372248 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.372326 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.872300 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:01.872372 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.872615 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:01.872654 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.372306 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.372380 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.372696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:02.872359 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.872696 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.372342 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.372415 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.372691 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.872274 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:03.872358 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.872689 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:03.872747 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.372710 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.372788 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.373166 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.872938 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:04.873008 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.873277 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.373122 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.373195 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.373512 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.872224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:05.872300 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.872635 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.372224 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.372295 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.372616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.372666 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:06.872296 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:06.872378 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.872698 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.372405 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.372492 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.372840 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.872529 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:07.872598 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.872872 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.372280 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.372370 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.372694 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:08.372751 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:08.872269 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:08.872352 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.872712 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.372420 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.372508 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.372887 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.872807 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:09.872889 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.873212 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.373055 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.373145 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.373487 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:10.373550 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:10.872220 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:10.872301 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.872593 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.372352 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.372434 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.372759 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.872270 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:11.872348 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.872616 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.372252 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.372328 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.372631 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.872308 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:12.872391 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.872730 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:12.872789 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:13.372490 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.372574 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.372922 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.872608 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:13.872675 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.872937 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.372532 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.372618 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.373079 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.872885 1255403 type.go:168] "Request Body" body=""
	I1217 00:52:14.872973 1255403 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-608344" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.873356 1255403 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:14.873435 1255403 node_ready.go:55] error getting node "functional-608344" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-608344": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:15.372134 1255403 node_ready.go:38] duration metric: took 6m0.000083316s for node "functional-608344" to be "Ready" ...
	I1217 00:52:15.375301 1255403 out.go:203] 
	W1217 00:52:15.378227 1255403 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:52:15.378247 1255403 out.go:285] * 
	W1217 00:52:15.380407 1255403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:52:15.382698 1255403 out.go:203] 
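	The six minutes of identical request/connection-refused pairs above are minikube's node-Ready poll: one GET against /api/v1/nodes/functional-608344 roughly every 500ms until the 6m0s wait deadline expires. As a rough sketch only (not minikube's actual code; everything except the node name is an assumption), the same readiness gate expressed as a shell poll:

	    # Poll the node's Ready condition every 500ms, matching the cadence in the timestamps above
	    until kubectl get node functional-608344 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
	      | grep -q True; do
	      sleep 0.5
	    done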
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:22 functional-608344 containerd[5242]: time="2025-12-17T00:52:22.479479054Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.520372157Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.522488079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.529714624Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:23 functional-608344 containerd[5242]: time="2025-12-17T00:52:23.530093558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.553269303Z" level=info msg="No images store for sha256:9036bf2657962274d57bf1ecb3ee331e146c93afba3fb164a6ce8fbb5db581df"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.555411777Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-608344\""
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.562404481Z" level=info msg="ImageCreate event name:\"sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:24 functional-608344 containerd[5242]: time="2025-12-17T00:52:24.563042528Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.348497538Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.351055787Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.352987633Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 17 00:52:25 functional-608344 containerd[5242]: time="2025-12-17T00:52:25.365256788Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.410289088Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.412568056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.420713963Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.421114501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.442491011Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.445001497Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.446979202Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.454692752Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.585450881Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.587652621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.598340425Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 00:52:26 functional-608344 containerd[5242]: time="2025-12-17T00:52:26.598905420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
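	The ImageCreate/ImageUpdate/ImageDelete events above are containerd's CRI image store churning in the k8s.io namespace (the minikube-local-cache-test and pause images suggest the parallel image tests were still exercising it). A hedged way to inspect that store on the node, assuming a working `minikube ssh` session:

	    minikube ssh -p functional-608344 -- sudo crictl images            # CRI-side view
	    minikube ssh -p functional-608344 -- sudo ctr -n k8s.io images ls  # containerd-side view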
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:52:30.551461    9409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:30.551931    9409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:30.553818    9409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:30.554306    9409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:52:30.556271    9409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:52:30 up  6:35,  0 user,  load average: 0.61, 0.34, 0.90
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:52:27 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 17 00:52:28 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 kubelet[9234]: E1217 00:52:28.178409    9234 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 17 00:52:28 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:28 functional-608344 kubelet[9283]: E1217 00:52:28.945269    9283 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:28 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:29 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 17 00:52:29 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:29 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:29 functional-608344 kubelet[9317]: E1217 00:52:29.621979    9317 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:29 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:29 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:52:30 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 17 00:52:30 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:30 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:52:30 functional-608344 kubelet[9378]: E1217 00:52:30.447124    9378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:52:30 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:52:30 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
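The kubelet crash-loop in the journal above (restart counter 826 through 829) is the cgroup v1 hard-fail slated for kubelet v1.35 (KEP-5573): on a cgroup v1 host like this Ubuntu 20.04 runner, kubelet now exits during configuration validation unless cgroup v1 support is explicitly re-enabled. A sketch of the opt-out named in the error message, in KubeletConfiguration form (field spelling per upstream docs, so treat it as an assumption here):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # explicitly allow running on a cgroup v1 host (KEP-5573)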
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (338.816195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.27s)
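For manual triage, the same Go-template status probe the harness runs above works from a shell; quoting the template avoids brace-expansion surprises:

    out/minikube-linux-arm64 status -p functional-608344 --format='{{.APIServer}}'
    # prints "Stopped" here, matching the post-mortem output above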

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:55:09.434040 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:56:56.883397 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:58:19.943807 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:09.433234 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:56.877705 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m11.844746781s)

-- stdout --
	* [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243331s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
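Each troubleshooting step quoted in the stderr above can be replayed against the node; a sketch of that triage using only commands the output itself names (wrapped in `minikube ssh` here for convenience, which is an assumption about the environment):

    minikube ssh -p functional-608344 -- systemctl status kubelet
    minikube ssh -p functional-608344 -- journalctl -xeu kubelet
    minikube ssh -p functional-608344 -- curl -sSL http://127.0.0.1:10248/healthz
    # and the restart variant suggested by the output above:
    out/minikube-linux-arm64 start -p functional-608344 --extra-config=kubelet.cgroup-driver=systemd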
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m11.846034263s for "functional-608344" cluster.
I1217 01:04:43.328770 1211243 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
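The inspect output above is what minikube parses for host-side port mappings; an equivalent manual query, reusing the Go template that appears in the start log below, is:

	# resolve the host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-608344
	# for the state captured above this prints 33943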
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (299.132921ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:latest                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add minikube-local-cache-test:functional-608344                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache delete minikube-local-cache-test:functional-608344                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl images                                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ cache          │ functional-608344 cache reload                                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ kubectl        │ functional-608344 kubectl -- --context functional-608344 get pods                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ start          │ -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
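Per the last audit row, the failing ExtraConfig start can be replayed outside the harness with the same flags (a sketch; the binary path, profile name, and flags are copied from the table above):

	out/minikube-linux-arm64 start -p functional-608344 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all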
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:52:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:52:31.527617 1261197 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:31.527758 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527763 1261197 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:31.527767 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527997 1261197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:52:31.528338 1261197 out.go:368] Setting JSON to false
	I1217 00:52:31.529124 1261197 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23702,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:52:31.529179 1261197 start.go:143] virtualization:  
	I1217 00:52:31.532534 1261197 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:52:31.537145 1261197 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:52:31.537272 1261197 notify.go:221] Checking for updates...
	I1217 00:52:31.542910 1261197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:31.545800 1261197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:52:31.548609 1261197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:52:31.551556 1261197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:52:31.554346 1261197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:31.557970 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:31.558066 1261197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:31.587498 1261197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:52:31.587608 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.650823 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.641966313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.650910 1261197 docker.go:319] overlay module found
	I1217 00:52:31.653844 1261197 out.go:179] * Using the docker driver based on existing profile
	I1217 00:52:31.656662 1261197 start.go:309] selected driver: docker
	I1217 00:52:31.656669 1261197 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.656773 1261197 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:31.656888 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.710052 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.70077893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.710641 1261197 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:52:31.710676 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:31.710788 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:31.710847 1261197 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.713993 1261197 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:52:31.716755 1261197 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:52:31.719575 1261197 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:52:31.722367 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:31.722402 1261197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:52:31.722423 1261197 cache.go:65] Caching tarball of preloaded images
	I1217 00:52:31.722451 1261197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:52:31.722505 1261197 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:52:31.722513 1261197 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:52:31.722616 1261197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:52:31.740561 1261197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:52:31.740571 1261197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:52:31.740584 1261197 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:52:31.740613 1261197 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:52:31.740665 1261197 start.go:364] duration metric: took 37.006µs to acquireMachinesLock for "functional-608344"
	I1217 00:52:31.740682 1261197 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:52:31.740687 1261197 fix.go:54] fixHost starting: 
	I1217 00:52:31.740957 1261197 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:52:31.756910 1261197 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:52:31.756929 1261197 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:52:31.760018 1261197 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:52:31.760042 1261197 machine.go:94] provisionDockerMachine start ...
	I1217 00:52:31.760119 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.776640 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.776960 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.776966 1261197 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:52:31.905356 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:31.905370 1261197 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:52:31.905445 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.925834 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.926164 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.926177 1261197 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:52:32.067014 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:32.067088 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.084172 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:32.084485 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:32.084499 1261197 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:52:32.214216 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:52:32.214232 1261197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:52:32.214253 1261197 ubuntu.go:190] setting up certificates
	I1217 00:52:32.214268 1261197 provision.go:84] configureAuth start
	I1217 00:52:32.214325 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:32.232515 1261197 provision.go:143] copyHostCerts
	I1217 00:52:32.232580 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:52:32.232588 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:52:32.232671 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:52:32.232772 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:52:32.232776 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:52:32.232801 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:52:32.232878 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:52:32.232885 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:52:32.232913 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:52:32.232967 1261197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:52:32.616759 1261197 provision.go:177] copyRemoteCerts
	I1217 00:52:32.616824 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:52:32.616864 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.638193 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.737540 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:52:32.755258 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:52:32.772709 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:52:32.791423 1261197 provision.go:87] duration metric: took 577.141949ms to configureAuth
	I1217 00:52:32.791441 1261197 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:52:32.791635 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:32.791640 1261197 machine.go:97] duration metric: took 1.031594088s to provisionDockerMachine
	I1217 00:52:32.791646 1261197 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:52:32.791656 1261197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:52:32.791701 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:52:32.791750 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.809559 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.905557 1261197 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:52:32.908787 1261197 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:52:32.908827 1261197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:52:32.908837 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:52:32.908891 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:52:32.908975 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:52:32.909048 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:52:32.909089 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:52:32.916399 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:32.933317 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:52:32.950047 1261197 start.go:296] duration metric: took 158.386583ms for postStartSetup
	I1217 00:52:32.950118 1261197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:52:32.950170 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.968857 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.062653 1261197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:52:33.067278 1261197 fix.go:56] duration metric: took 1.32658398s for fixHost
	I1217 00:52:33.067294 1261197 start.go:83] releasing machines lock for "functional-608344", held for 1.326621929s
	I1217 00:52:33.067361 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:33.084000 1261197 ssh_runner.go:195] Run: cat /version.json
	I1217 00:52:33.084040 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.084288 1261197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:52:33.084348 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.108566 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.111371 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.289488 1261197 ssh_runner.go:195] Run: systemctl --version
	I1217 00:52:33.296034 1261197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:52:33.300233 1261197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:52:33.300292 1261197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:52:33.307943 1261197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:52:33.307957 1261197 start.go:496] detecting cgroup driver to use...
	I1217 00:52:33.307988 1261197 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:52:33.308034 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:52:33.325973 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:52:33.341243 1261197 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:52:33.341313 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:52:33.357700 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:52:33.373469 1261197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:52:33.498827 1261197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:52:33.614529 1261197 docker.go:234] disabling docker service ...
	I1217 00:52:33.614598 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:52:33.629592 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:52:33.642692 1261197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:52:33.771770 1261197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:52:33.894226 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:52:33.907337 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:52:33.922634 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:52:33.932171 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:52:33.941438 1261197 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:52:33.941508 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:52:33.950063 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.958782 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:52:33.967078 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.975466 1261197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:52:33.983339 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:52:33.991895 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:52:34.000351 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:52:34.010891 1261197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:52:34.018879 1261197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:52:34.026594 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.150165 1261197 ssh_runner.go:195] Run: sudo systemctl restart containerd
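The containerd reconfiguration above condenses to the sketch below (assuming the stock /etc/containerd/config.toml shipped in the kicbase image; the sed expressions are the ones the harness runs, and the earlier printf step left /etc/crictl.yaml with the single line runtime-endpoint: unix:///run/containerd/containerd.sock):

	# force the cgroupfs driver and the pinned pause image, then restart containerd
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd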
	I1217 00:52:34.299897 1261197 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:52:34.299958 1261197 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:52:34.303895 1261197 start.go:564] Will wait 60s for crictl version
	I1217 00:52:34.303948 1261197 ssh_runner.go:195] Run: which crictl
	I1217 00:52:34.307381 1261197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:52:34.334814 1261197 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:52:34.334888 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.355644 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.381331 1261197 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:52:34.384165 1261197 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:52:34.399831 1261197 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:52:34.407243 1261197 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:52:34.410160 1261197 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:52:34.410312 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:34.410394 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.434882 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.434894 1261197 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:52:34.434955 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.460154 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.460166 1261197 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:52:34.460173 1261197 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:52:34.460276 1261197 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
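This unit text is written to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step below); assuming the container is still reachable over SSH, the rendered drop-in can be inspected with:

	minikube -p functional-608344 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf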
	I1217 00:52:34.460340 1261197 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:52:34.485418 1261197 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:52:34.485440 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:34.485447 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:34.485462 1261197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:52:34.485483 1261197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:52:34.485591 1261197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
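Whether the NamespaceAutoProvision override actually reached the control plane can be checked against the static pod manifest under the staticPodPath configured above (a minimal check; it assumes kubeadm has already rendered the manifest on the node):

	minikube -p functional-608344 ssh -- sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml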
	I1217 00:52:34.485688 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:52:34.493475 1261197 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:52:34.493536 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:52:34.501738 1261197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:52:34.515117 1261197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:52:34.528350 1261197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1217 00:52:34.541325 1261197 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:52:34.545027 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.663222 1261197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:52:34.871198 1261197 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:52:34.871209 1261197 certs.go:195] generating shared ca certs ...
	I1217 00:52:34.871223 1261197 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:52:34.871350 1261197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:52:34.871405 1261197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:52:34.871411 1261197 certs.go:257] generating profile certs ...
	I1217 00:52:34.871503 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:52:34.871558 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:52:34.871595 1261197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:52:34.871710 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:52:34.871738 1261197 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:52:34.871746 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:52:34.871770 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:52:34.871791 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:52:34.871819 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:52:34.871867 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:34.872533 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:52:34.890674 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:52:34.908252 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:52:34.925752 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:52:34.942982 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:52:34.961072 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:52:34.978793 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:52:34.995794 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:52:35.016106 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:52:35.035474 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:52:35.054248 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:52:35.072025 1261197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
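	
	The cert steps above skip regeneration ("skipping valid signed profile cert regeneration") when an existing certificate is unexpired and still covers the node IP. A sketch of such a check with the standard library — certStillValid is a hypothetical helper, not minikube's certs.go logic:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "net"
        "os"
        "time"
    )

    // certStillValid reports whether the PEM cert at path is unexpired
    // and valid for the given node IP (checked against its IP SANs).
    func certStillValid(path string, ip net.IP) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return errors.New("no PEM data")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().After(cert.NotAfter) {
            return errors.New("certificate expired")
        }
        return cert.VerifyHostname(ip.String()) // IP SANs are checked too
    }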
	I1217 00:52:35.085836 1261197 ssh_runner.go:195] Run: openssl version
	I1217 00:52:35.092498 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.100138 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:52:35.107992 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111748 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111805 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.153206 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:52:35.161118 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.168560 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:52:35.176276 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180431 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180496 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.224274 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:52:35.231870 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.239209 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:52:35.246988 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250581 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250708 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.291833 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
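	
	The repeating test -s / ln -fs / openssl x509 -hash / test -L sequence above installs each CA into the OpenSSL hash directory: the symlink must be named <subject-hash>.0 (e.g. b5213941.0) for lookups in /etc/ssl/certs to find it. Sketched by shelling out to the same commands the log runs — installCA is a hypothetical helper:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCA links a CA cert into /etc/ssl/certs under its subject hash.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }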
	I1217 00:52:35.299197 1261197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:52:35.302994 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:52:35.343876 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:52:35.384935 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:52:35.425945 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:52:35.468160 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:52:35.509040 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
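	
	Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). The same check expressed in Go, as a sketch — checkEnd is hypothetical:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd fails if the cert at path expires within the given window,
    // mirroring `openssl x509 -checkend <seconds>`.
    func checkEnd(path string, window time.Duration) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().Add(window).After(cert.NotAfter) {
            return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
        }
        return nil
    }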
	I1217 00:52:35.549950 1261197 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:35.550030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:52:35.550101 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.575493 1261197 cri.go:89] found id: ""
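	
	`crictl ps -a --quiet --label ...` prints one container ID per line, so empty output is what the log records as `found id: ""`. A sketch of the same listing call — kubeSystemContainers is a hypothetical wrapper:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // kubeSystemContainers lists all kube-system container IDs via crictl.
    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty slice when none found
    }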
	I1217 00:52:35.575551 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:52:35.583488 1261197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:52:35.583498 1261197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:52:35.583562 1261197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:52:35.590939 1261197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.591435 1261197 kubeconfig.go:125] found "functional-608344" server: "https://192.168.49.2:8441"
	I1217 00:52:35.592674 1261197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:52:35.600478 1261197 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:38:00.276726971 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:52:34.535031442 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
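	
	The drift check above leans on diff's exit status: 0 means the deployed kubeadm.yaml already matches the new one, 1 means they differ and the unified diff doubles as the report (here: the admission-plugins value changed, triggering the reconfigure). A sketch of the pattern — configDrift is a hypothetical helper:

    package sketch

    import (
        "errors"
        "os/exec"
    )

    // configDrift runs `diff -u old new`; exit 0 means identical,
    // exit 1 means drift (out holds the unified diff), anything else
    // is a real error (missing file, etc.).
    func configDrift(oldPath, newPath string) (report string, drifted bool, err error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return "", false, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return string(out), true, nil
        }
        return "", false, err
    }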
	I1217 00:52:35.600490 1261197 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:52:35.600503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 00:52:35.600556 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.635394 1261197 cri.go:89] found id: ""
	I1217 00:52:35.635452 1261197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:52:35.655954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:52:35.664843 1261197 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 00:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 00:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 17 00:42 /etc/kubernetes/scheduler.conf
	
	I1217 00:52:35.664920 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:52:35.673926 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:52:35.681783 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.681837 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:52:35.689482 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.698370 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.698438 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.705988 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:52:35.714414 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.714484 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
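	
	The grep/rm sequence above prunes every kubeconfig that no longer mentions the expected control-plane endpoint, so that the kubeadm phases below regenerate them. Sketched as a loop — pruneStaleKubeconfigs is a hypothetical helper:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // pruneStaleKubeconfigs deletes any conf file that does not reference
    // the expected endpoint; grep exits non-zero when it is absent.
    func pruneStaleKubeconfigs(endpoint string, files []string) error {
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }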
	I1217 00:52:35.722072 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:52:35.729848 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:35.776855 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.711300 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.926722 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.999232 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
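	
	The restart path re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`, with PATH pinned to the versioned binaries directory. A sketch of invoking one phase that way — runPhase is a hypothetical helper:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // runPhase executes a single kubeadm init phase using the version-pinned
    // binaries, mirroring the shell lines in the log above.
    func runPhase(version, phase string) ([]byte, error) {
        binDir := fmt.Sprintf("/var/lib/minikube/binaries/%s", version)
        cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binDir, phase)
        return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    }

	For example, runPhase("v1.35.0-beta.0", "control-plane all") would correspond to the fourth phase invocation above.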
	I1217 00:52:37.047947 1261197 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:52:37.048019 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:37.548207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.048861 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.048206 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.548189 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.049366 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.548557 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.048152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.549106 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.048793 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.549138 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.049014 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.048840 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.048979 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.549120 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.049193 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.548932 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.048207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.548119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.048127 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.548295 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.049080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.548771 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.048210 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.548773 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.048258 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.549096 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.048188 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.049033 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.549038 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.048512 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.548619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.048253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.549044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.048294 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.548919 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.048218 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.548855 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.048880 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.548221 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.048194 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.548710 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.048613 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.548834 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.049119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.548167 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.048599 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.549080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.048587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.548846 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.048217 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.049020 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.548398 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.049097 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.548960 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.049065 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.548376 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.048388 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.548808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.048244 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.548239 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.049099 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.549083 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.549030 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.048350 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.548287 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.048923 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.048292 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.549092 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.048874 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.549144 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.048777 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.548153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.048868 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.548124 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.048936 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.048238 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.048954 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.548662 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.049044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.548942 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.048968 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.548787 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.048489 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.548243 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.549178 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.048993 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.548676 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.049104 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.048853 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.549118 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.048215 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.549153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.048154 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.549126 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.048949 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.048782 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.548760 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.048205 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.049183 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.548231 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.549031 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.048208 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.548852 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
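	
	The half-second pgrep cadence above is a poll-until-deadline loop; once the wait expires without a kube-apiserver process ever appearing, control falls through to the diagnostics gathering below. A sketch of the pattern — waitForAPIServer is a hypothetical helper, not minikube's api_server.go:

    package sketch

    import (
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep every 500ms until the process appears
    // or the deadline passes; callers dump diagnostics on timeout.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }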
	I1217 00:53:37.048332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:37.048420 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:37.076924 1261197 cri.go:89] found id: ""
	I1217 00:53:37.076939 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.076947 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:37.076953 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:37.077010 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:37.103936 1261197 cri.go:89] found id: ""
	I1217 00:53:37.103950 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.103957 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:37.103962 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:37.104019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:37.134578 1261197 cri.go:89] found id: ""
	I1217 00:53:37.134592 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.134599 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:37.134605 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:37.134667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:37.162973 1261197 cri.go:89] found id: ""
	I1217 00:53:37.162986 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.162994 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:37.162999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:37.163063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:37.193768 1261197 cri.go:89] found id: ""
	I1217 00:53:37.193782 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.193789 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:37.193794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:37.193864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:37.217378 1261197 cri.go:89] found id: ""
	I1217 00:53:37.217391 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.217398 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:37.217403 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:37.217464 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:37.245938 1261197 cri.go:89] found id: ""
	I1217 00:53:37.245952 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.245959 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:37.245967 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:37.245977 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:37.303279 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:37.303297 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:37.317809 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:37.317826 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:37.378847 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
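	
	The repeated "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure. A cheap TCP probe distinguishes the two cases without involving kubectl — a sketch, not part of the test harness:

    package sketch

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverListening reports whether anything accepts connections on
    // the apiserver port; "connect: connection refused" means no listener.
    func apiserverListening() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return false
        }
        conn.Close()
        return true
    }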
	I1217 00:53:37.378858 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:37.378870 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:37.440776 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:37.440795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
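	
	Each diagnostics round runs the same four shell pipelines (kubelet journal, dmesg, containerd journal, container status) and keeps whatever output comes back, then the pgrep probe and the whole cycle repeat, as seen below. A sketch of one round — gatherLogs is a hypothetical helper:

    package sketch

    import "os/exec"

    // gatherLogs collects one diagnostics round; errors are tolerated
    // since partial logs are still useful in the failure report.
    func gatherLogs() map[string][]byte {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        logs := make(map[string][]byte)
        for name, cmdline := range sources {
            out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            logs[name] = out
        }
        return logs
    }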
	I1217 00:53:39.970536 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:39.980652 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:39.980714 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:40.014928 1261197 cri.go:89] found id: ""
	I1217 00:53:40.014943 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.014950 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:40.014956 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:40.015027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:40.044249 1261197 cri.go:89] found id: ""
	I1217 00:53:40.044284 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.044292 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:40.044299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:40.044375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:40.071071 1261197 cri.go:89] found id: ""
	I1217 00:53:40.071086 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.071094 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:40.071100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:40.071166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:40.096922 1261197 cri.go:89] found id: ""
	I1217 00:53:40.096936 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.096944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:40.096950 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:40.097019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:40.126209 1261197 cri.go:89] found id: ""
	I1217 00:53:40.126223 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.126231 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:40.126237 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:40.126302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:40.166443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.166457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.166465 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:40.166470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:40.166532 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:40.194443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.194457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.194465 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:40.194472 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:40.194483 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:40.249960 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:40.249980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:40.264714 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:40.264730 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:40.334158 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:40.334168 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:40.334179 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:40.396176 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:40.396196 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:42.927525 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:42.939255 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:42.939317 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:42.967766 1261197 cri.go:89] found id: ""
	I1217 00:53:42.967780 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.967788 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:42.967793 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:42.967852 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:42.992216 1261197 cri.go:89] found id: ""
	I1217 00:53:42.992230 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.992238 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:42.992244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:42.992301 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:43.018174 1261197 cri.go:89] found id: ""
	I1217 00:53:43.018188 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.018196 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:43.018201 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:43.018260 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:43.043673 1261197 cri.go:89] found id: ""
	I1217 00:53:43.043687 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.043695 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:43.043701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:43.043763 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:43.067990 1261197 cri.go:89] found id: ""
	I1217 00:53:43.068005 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.068012 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:43.068017 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:43.068079 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:43.093908 1261197 cri.go:89] found id: ""
	I1217 00:53:43.093923 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.093930 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:43.093936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:43.093995 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:43.120199 1261197 cri.go:89] found id: ""
	I1217 00:53:43.120213 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.120220 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:43.120228 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:43.120238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:43.181971 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:43.181989 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:43.197524 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:43.197541 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:43.261336 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:43.261356 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:43.261366 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:43.322519 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:43.322538 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:45.852691 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:45.863769 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:45.863831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:45.888335 1261197 cri.go:89] found id: ""
	I1217 00:53:45.888350 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.888357 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:45.888363 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:45.888422 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:45.918194 1261197 cri.go:89] found id: ""
	I1217 00:53:45.918209 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.918216 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:45.918222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:45.918285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:45.943809 1261197 cri.go:89] found id: ""
	I1217 00:53:45.943824 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.943831 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:45.943836 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:45.943893 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:45.969167 1261197 cri.go:89] found id: ""
	I1217 00:53:45.969182 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.969189 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:45.969195 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:45.969261 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:45.995411 1261197 cri.go:89] found id: ""
	I1217 00:53:45.995425 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.995432 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:45.995437 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:45.995495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:46.025138 1261197 cri.go:89] found id: ""
	I1217 00:53:46.025153 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.025161 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:46.025167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:46.025230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:46.052563 1261197 cri.go:89] found id: ""
	I1217 00:53:46.052578 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.052585 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:46.052594 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:46.052604 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:46.110268 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:46.110286 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:46.128213 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:46.128230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:46.211985 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:46.212008 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:46.212018 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:46.274022 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:46.274041 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:48.809808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:48.820115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:48.820172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:48.846046 1261197 cri.go:89] found id: ""
	I1217 00:53:48.846062 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.846069 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:48.846075 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:48.846145 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:48.871706 1261197 cri.go:89] found id: ""
	I1217 00:53:48.871721 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.871728 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:48.871734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:48.871794 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:48.896325 1261197 cri.go:89] found id: ""
	I1217 00:53:48.896341 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.896348 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:48.896353 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:48.896413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:48.922321 1261197 cri.go:89] found id: ""
	I1217 00:53:48.922335 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.922342 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:48.922348 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:48.922406 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:48.951311 1261197 cri.go:89] found id: ""
	I1217 00:53:48.951325 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.951332 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:48.951337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:48.951395 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:48.976196 1261197 cri.go:89] found id: ""
	I1217 00:53:48.976211 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.976218 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:48.976224 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:48.976285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:49.005156 1261197 cri.go:89] found id: ""
	I1217 00:53:49.005173 1261197 logs.go:282] 0 containers: []
	W1217 00:53:49.005181 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:49.005190 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:49.005202 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:49.067318 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:49.067385 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:49.083407 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:49.083424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:49.159947 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:49.159958 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:49.159970 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:49.230934 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:49.230956 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
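The journalctl and dmesg invocations above closely mirror what "minikube logs" collects. A hedged way to pull the same data manually over SSH (the --no-pager flags are standard journalctl options added here for non-interactive use, not taken from this log):

	minikube -p functional-608344 ssh "sudo journalctl -u kubelet -n 400 --no-pager"
	minikube -p functional-608344 ssh "sudo journalctl -u containerd -n 400 --no-pager"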
	I1217 00:53:51.761379 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:51.771759 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:51.771821 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:51.796369 1261197 cri.go:89] found id: ""
	I1217 00:53:51.796384 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.796391 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:51.796396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:51.796454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:51.822318 1261197 cri.go:89] found id: ""
	I1217 00:53:51.822333 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.822340 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:51.822345 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:51.822409 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:51.847395 1261197 cri.go:89] found id: ""
	I1217 00:53:51.847409 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.847416 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:51.847421 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:51.847479 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:51.872529 1261197 cri.go:89] found id: ""
	I1217 00:53:51.872544 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.872552 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:51.872557 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:51.872619 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:51.900871 1261197 cri.go:89] found id: ""
	I1217 00:53:51.900885 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.900893 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:51.900898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:51.900967 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:51.928534 1261197 cri.go:89] found id: ""
	I1217 00:53:51.928548 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.928555 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:51.928560 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:51.928621 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:51.957597 1261197 cri.go:89] found id: ""
	I1217 00:53:51.957611 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.957619 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:51.957627 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:51.957636 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:52.016924 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:52.016945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:52.033440 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:52.033458 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:52.106352 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:52.106373 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:52.106384 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:52.173915 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:52.173934 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
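Every kubectl attempt in this loop fails identically: nothing is listening on localhost:8441, the --apiserver-port chosen for this run, so the describe-nodes step can never succeed until the apiserver comes up. A minimal probe to confirm that directly, assuming curl and iproute2 (ss) are present in the node image:

	minikube -p functional-608344 ssh "sudo ss -ltnp 'sport = :8441'"
	minikube -p functional-608344 ssh "curl -sk https://localhost:8441/healthz; echo"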
	I1217 00:53:54.703159 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:54.713797 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:54.713862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:54.739273 1261197 cri.go:89] found id: ""
	I1217 00:53:54.739287 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.739294 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:54.739299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:54.739355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:54.770340 1261197 cri.go:89] found id: ""
	I1217 00:53:54.770355 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.770362 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:54.770367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:54.770430 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:54.795583 1261197 cri.go:89] found id: ""
	I1217 00:53:54.795597 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.795604 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:54.795611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:54.795670 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:54.823673 1261197 cri.go:89] found id: ""
	I1217 00:53:54.823688 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.823696 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:54.823701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:54.823760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:54.849899 1261197 cri.go:89] found id: ""
	I1217 00:53:54.849913 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.849921 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:54.849927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:54.849986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:54.874746 1261197 cri.go:89] found id: ""
	I1217 00:53:54.874761 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.874767 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:54.874773 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:54.874831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:54.898944 1261197 cri.go:89] found id: ""
	I1217 00:53:54.898961 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.898968 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:54.898975 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:54.898986 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:54.913535 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:54.913552 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:54.975130 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:54.975140 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:54.975150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:55.037117 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:55.037139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:55.067838 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:55.067855 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.627174 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:57.637082 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:57.637153 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:57.661527 1261197 cri.go:89] found id: ""
	I1217 00:53:57.661541 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.661548 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:57.661553 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:57.661611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:57.685175 1261197 cri.go:89] found id: ""
	I1217 00:53:57.685189 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.685200 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:57.685205 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:57.685263 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:57.711702 1261197 cri.go:89] found id: ""
	I1217 00:53:57.711717 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.711724 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:57.711729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:57.711868 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:57.740036 1261197 cri.go:89] found id: ""
	I1217 00:53:57.740050 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.740058 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:57.740063 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:57.740122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:57.768675 1261197 cri.go:89] found id: ""
	I1217 00:53:57.768697 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.768704 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:57.768710 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:57.768775 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:57.792870 1261197 cri.go:89] found id: ""
	I1217 00:53:57.792883 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.792890 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:57.792895 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:57.792965 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:57.817001 1261197 cri.go:89] found id: ""
	I1217 00:53:57.817015 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.817022 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:57.817031 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:57.817053 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.871861 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:57.871881 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:57.886738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:57.886755 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:57.949301 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:57.949319 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:57.949329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:58.010230 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:58.010249 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
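Every component query in these cycles returns found id: "" and 0 containers, i.e. containerd never created any control-plane container at all. That points at kubelet failing before static-pod creation rather than at a crashed apiserver. A hedged next step (the manifest path is the standard kubeadm location, not shown in this log):

	minikube -p functional-608344 ssh "ls -l /etc/kubernetes/manifests"
	minikube -p functional-608344 ssh "sudo systemctl status kubelet --no-pager"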
	I1217 00:54:00.540430 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:00.550751 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:00.550814 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:00.576488 1261197 cri.go:89] found id: ""
	I1217 00:54:00.576501 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.576510 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:00.576515 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:00.576573 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:00.601369 1261197 cri.go:89] found id: ""
	I1217 00:54:00.601383 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.601396 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:00.601401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:00.601459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:00.625632 1261197 cri.go:89] found id: ""
	I1217 00:54:00.625667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.625675 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:00.625680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:00.625738 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:00.651689 1261197 cri.go:89] found id: ""
	I1217 00:54:00.651703 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.651710 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:00.651715 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:00.651777 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:00.679744 1261197 cri.go:89] found id: ""
	I1217 00:54:00.679757 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.679765 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:00.679770 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:00.679828 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:00.709559 1261197 cri.go:89] found id: ""
	I1217 00:54:00.709573 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.709580 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:00.709585 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:00.709662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:00.734417 1261197 cri.go:89] found id: ""
	I1217 00:54:00.734432 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.734439 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:00.734447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:00.734457 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.797638 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:00.797675 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:00.797685 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:00.859579 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.859598 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.885766 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:00.885783 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:00.946324 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:00.946344 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.461934 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:03.472673 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:03.472733 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:03.496966 1261197 cri.go:89] found id: ""
	I1217 00:54:03.496980 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.496987 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:03.496992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:03.497048 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:03.522192 1261197 cri.go:89] found id: ""
	I1217 00:54:03.522207 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.522214 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:03.522219 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:03.522280 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:03.547069 1261197 cri.go:89] found id: ""
	I1217 00:54:03.547083 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.547090 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:03.547095 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:03.547175 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:03.572136 1261197 cri.go:89] found id: ""
	I1217 00:54:03.572149 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.572156 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:03.572162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:03.572234 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:03.600755 1261197 cri.go:89] found id: ""
	I1217 00:54:03.600770 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.600782 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:03.600788 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:03.600859 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:03.629818 1261197 cri.go:89] found id: ""
	I1217 00:54:03.629836 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.629843 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:03.629849 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:03.629905 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:03.656769 1261197 cri.go:89] found id: ""
	I1217 00:54:03.656783 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.656790 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:03.656797 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:03.656807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:03.712292 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:03.712313 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.727502 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:03.727518 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:03.791668 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:03.782970   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.783616   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785323   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785958   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.787552   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:03.791678 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:03.791688 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:03.854180 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:03.854200 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:06.381966 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:06.393097 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:06.393156 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:06.429087 1261197 cri.go:89] found id: ""
	I1217 00:54:06.429101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.429108 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:06.429113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:06.429189 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:06.454075 1261197 cri.go:89] found id: ""
	I1217 00:54:06.454091 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.454101 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:06.454106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:06.454179 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:06.478067 1261197 cri.go:89] found id: ""
	I1217 00:54:06.478081 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.478088 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:06.478093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:06.478149 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:06.503508 1261197 cri.go:89] found id: ""
	I1217 00:54:06.503522 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.503529 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:06.503534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:06.503592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:06.532125 1261197 cri.go:89] found id: ""
	I1217 00:54:06.532139 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.532146 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:06.532151 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:06.532218 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:06.557383 1261197 cri.go:89] found id: ""
	I1217 00:54:06.557397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.557404 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:06.557409 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:06.557482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:06.583086 1261197 cri.go:89] found id: ""
	I1217 00:54:06.583101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.583109 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:06.583117 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:06.583128 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:06.638133 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:06.638153 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:06.652420 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:06.652439 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:06.715679 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:06.706907   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.707622   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709271   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709877   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.711565   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:06.715692 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:06.715703 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:06.783529 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:06.783557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
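If the static-pod manifests exist but no containers ever appear, listing pod sandboxes separates "kubelet never asked containerd for pods" from "sandboxes were created and died immediately". A short sketch, again assuming crictl is on the node path:

	minikube -p functional-608344 ssh "sudo crictl pods"
	minikube -p functional-608344 ssh "sudo crictl ps -a --state exited"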
	I1217 00:54:09.314587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:09.324947 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:09.325009 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:09.349922 1261197 cri.go:89] found id: ""
	I1217 00:54:09.349945 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.349952 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:09.349957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:09.350025 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:09.381538 1261197 cri.go:89] found id: ""
	I1217 00:54:09.381552 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.381560 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:09.381565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:09.381627 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:09.412584 1261197 cri.go:89] found id: ""
	I1217 00:54:09.412606 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.412613 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:09.412621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:09.412696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:09.446518 1261197 cri.go:89] found id: ""
	I1217 00:54:09.446533 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.446541 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:09.446547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:09.446620 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:09.477943 1261197 cri.go:89] found id: ""
	I1217 00:54:09.477956 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.477963 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:09.477968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:09.478027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:09.503386 1261197 cri.go:89] found id: ""
	I1217 00:54:09.503400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.503407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:09.503413 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:09.503476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:09.528266 1261197 cri.go:89] found id: ""
	I1217 00:54:09.528292 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.528300 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:09.528308 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:09.528318 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:09.590766 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:09.590786 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.618540 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:09.618556 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:09.675017 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:09.675037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:09.689541 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:09.689557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:09.753013 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:09.744768   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.745442   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747017   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747521   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.749196   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:12.253253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:12.263867 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:12.263926 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:12.289871 1261197 cri.go:89] found id: ""
	I1217 00:54:12.289888 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.289904 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:12.289910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:12.289975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:12.316441 1261197 cri.go:89] found id: ""
	I1217 00:54:12.316455 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.316462 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:12.316467 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:12.316527 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:12.340348 1261197 cri.go:89] found id: ""
	I1217 00:54:12.340362 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.340370 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:12.340375 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:12.340432 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:12.364082 1261197 cri.go:89] found id: ""
	I1217 00:54:12.364097 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.364104 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:12.364109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:12.364167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:12.390849 1261197 cri.go:89] found id: ""
	I1217 00:54:12.390863 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.390870 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:12.390875 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:12.390933 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:12.420430 1261197 cri.go:89] found id: ""
	I1217 00:54:12.420444 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.420451 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:12.420456 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:12.420518 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:12.448205 1261197 cri.go:89] found id: ""
	I1217 00:54:12.448221 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.448228 1261197 logs.go:284] No container was found matching "kindnet"
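	The seven crictl queries above are one check repeated per control-plane component. As a sketch, the whole sweep (component names and the crictl invocation taken verbatim from the log) condenses to a single loop:

	    # Sketch of the per-component container sweep the test performs.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done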
	I1217 00:54:12.448236 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:12.448247 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:12.504931 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:12.504952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:12.519968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:12.519985 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:12.584010 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:12.584021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:12.584032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:12.647102 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:12.647123 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
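	The cycle above (apiserver process check, per-component sweep, then kubelet/dmesg/describe-nodes/containerd/container-status gathering) is what the harness re-runs while it waits for the apiserver. Reduced to a sketch, the wait is a poll on the apiserver process; the pgrep pattern is copied from the log, while the five-minute deadline is an assumption, not something the log states:

	    # Sketch of the wait loop implied by the repeated cycles below.
	    deadline=$((SECONDS + 300))   # assumed timeout; the real value is not in the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
	      sleep 3
	    done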
	[... the same log-gathering cycle repeats at roughly three-second intervals (00:54:15, 00:54:18, 00:54:21, 00:54:24, 00:54:27, 00:54:30, 00:54:33, 00:54:36); every pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet container, and each "describe nodes" attempt fails with the same connection-refused errors against https://localhost:8441 ...]
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:36.045231 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:36.045241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:36.106277 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:36.106296 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
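Each retry cycle begins with the same liveness probe: pgrep looks for the newest process whose full command line matches kube-apiserver.*minikube.*. A hedged sketch of that check with an explicit result message (the variable name is illustrative, not minikube's):

    # -f: match against the full command line, -x: require the pattern to
    # match the whole line, -n: report only the newest matching PID.
    if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
        echo "kube-apiserver is running as PID ${pid}"
    else
        echo "no kube-apiserver process found; collecting logs instead" >&2
    fi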
	I1217 00:54:38.637781 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:38.649664 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:38.649725 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:38.691238 1261197 cri.go:89] found id: ""
	I1217 00:54:38.691252 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.691259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:38.691264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:38.691322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:38.716035 1261197 cri.go:89] found id: ""
	I1217 00:54:38.716049 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.716055 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:38.716066 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:38.716125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:38.740603 1261197 cri.go:89] found id: ""
	I1217 00:54:38.740616 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.740624 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:38.740629 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:38.740687 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:38.766239 1261197 cri.go:89] found id: ""
	I1217 00:54:38.766253 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.766260 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:38.766266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:38.766324 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:38.791492 1261197 cri.go:89] found id: ""
	I1217 00:54:38.791506 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.791513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:38.791519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:38.791579 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:38.816435 1261197 cri.go:89] found id: ""
	I1217 00:54:38.816449 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.816456 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:38.816461 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:38.816520 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:38.841085 1261197 cri.go:89] found id: ""
	I1217 00:54:38.841099 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.841107 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:38.841114 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:38.841124 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:38.896837 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:38.896856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:38.911640 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:38.911658 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:38.976373 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:38.976383 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:38.976393 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:39.037751 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:39.037771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
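The seven crictl queries in each cycle differ only in the --name filter, so the whole sweep collapses to a loop. A compact equivalent of what the log shows, assuming crictl talks to the same containerd socket (the repeated found id: "" lines correspond to empty output here):

    # Probe each expected control-plane component; an empty result means
    # containerd has no container, running or exited, with that name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\"" >&2
    done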
	I1217 00:54:41.567032 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:41.578116 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:41.578182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:41.603748 1261197 cri.go:89] found id: ""
	I1217 00:54:41.603762 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.603770 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:41.603775 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:41.603833 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:41.634998 1261197 cri.go:89] found id: ""
	I1217 00:54:41.635012 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.635019 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:41.635024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:41.635080 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:41.678283 1261197 cri.go:89] found id: ""
	I1217 00:54:41.678297 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.678307 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:41.678312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:41.678375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:41.704945 1261197 cri.go:89] found id: ""
	I1217 00:54:41.704960 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.704967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:41.704977 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:41.705035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:41.729909 1261197 cri.go:89] found id: ""
	I1217 00:54:41.729923 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.729930 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:41.729936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:41.730019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:41.754648 1261197 cri.go:89] found id: ""
	I1217 00:54:41.754662 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.754669 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:41.754675 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:41.754734 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:41.779433 1261197 cri.go:89] found id: ""
	I1217 00:54:41.779448 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.779455 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:41.779463 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:41.779474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:41.793989 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:41.794006 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:41.858584 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:41.858594 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:41.858605 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:41.923655 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:41.923682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.950619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:41.950638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
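Every "describe nodes" gather fails the same way: the guest's pinned kubectl binary dials https://localhost:8441 and gets connection refused, which is expected while no kube-apiserver container exists. One way to make that precondition explicit before invoking kubectl (a sketch relying on bash's /dev/tcp redirection, with the same paths the log uses):

    # Fail fast if nothing is listening on the apiserver port, instead of
    # letting kubectl retry and print five "connection refused" errors.
    if (exec 3<>/dev/tcp/localhost/8441) 2>/dev/null; then
        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
            --kubeconfig=/var/lib/minikube/kubeconfig
    else
        echo "localhost:8441 refused the connection; kube-apiserver is down" >&2
    fi

The subshell opens a TCP connection on file descriptor 3 and closes it on exit, so the probe leaves nothing behind.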
	I1217 00:54:44.507762 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:44.517733 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:44.517793 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:44.541892 1261197 cri.go:89] found id: ""
	I1217 00:54:44.541905 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.541924 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:44.541929 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:44.541986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:44.570803 1261197 cri.go:89] found id: ""
	I1217 00:54:44.570818 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.570824 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:44.570830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:44.570889 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:44.599324 1261197 cri.go:89] found id: ""
	I1217 00:54:44.599338 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.599345 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:44.599351 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:44.599412 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:44.632615 1261197 cri.go:89] found id: ""
	I1217 00:54:44.632629 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.632637 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:44.632643 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:44.632705 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:44.659976 1261197 cri.go:89] found id: ""
	I1217 00:54:44.659989 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.660009 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:44.660015 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:44.660085 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:44.688987 1261197 cri.go:89] found id: ""
	I1217 00:54:44.689000 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.689007 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:44.689013 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:44.689069 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:44.712988 1261197 cri.go:89] found id: ""
	I1217 00:54:44.713002 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.713010 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:44.713018 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:44.713030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:44.727473 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:44.727489 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:44.794008 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:44.794021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:44.794031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:44.855600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:44.855621 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:44.883007 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:44.883023 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
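The unit-log gathers are plain journalctl tail reads: the last 400 lines each from the kubelet and containerd units. Bundled into one helper for interactive debugging (the function name is made up for this sketch):

    # Tail the systemd units minikube inspects, newest 400 lines each.
    tail_units() {
        for unit in kubelet containerd; do
            echo "=== journalctl -u ${unit} ==="
            sudo journalctl -u "$unit" -n 400 --no-pager
        done
    }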
	I1217 00:54:47.442293 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:47.452401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:47.452465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:47.475940 1261197 cri.go:89] found id: ""
	I1217 00:54:47.475953 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.475960 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:47.475965 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:47.476021 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:47.500287 1261197 cri.go:89] found id: ""
	I1217 00:54:47.500302 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.500309 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:47.500314 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:47.500371 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:47.537066 1261197 cri.go:89] found id: ""
	I1217 00:54:47.537080 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.537087 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:47.537091 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:47.537147 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:47.561363 1261197 cri.go:89] found id: ""
	I1217 00:54:47.561377 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.561384 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:47.561390 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:47.561446 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:47.586917 1261197 cri.go:89] found id: ""
	I1217 00:54:47.586931 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.586939 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:47.586944 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:47.587006 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:47.611775 1261197 cri.go:89] found id: ""
	I1217 00:54:47.611789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.611796 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:47.611805 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:47.611862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:47.648123 1261197 cri.go:89] found id: ""
	I1217 00:54:47.648137 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.648145 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:47.648152 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:47.648163 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.716428 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:47.716447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:47.732842 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:47.732876 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:47.801539 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:47.801549 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:47.801559 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:47.863256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:47.863276 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
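The dmesg gather filters the kernel ring buffer down to warning severity and above before tailing it: -P disables the pager that -H (human-readable timestamps) would otherwise invoke, and -L=never strips color codes so the output logs cleanly. The same command stands alone as (flags verbatim from the log; util-linux dmesg):

    # Kernel messages at warn severity or higher, newest 400 lines,
    # human-readable timestamps, no pager, no ANSI color.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400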
	I1217 00:54:50.394435 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:50.404927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:50.404986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:50.429607 1261197 cri.go:89] found id: ""
	I1217 00:54:50.429621 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.429628 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:50.429634 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:50.429731 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:50.454601 1261197 cri.go:89] found id: ""
	I1217 00:54:50.454615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.454622 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:50.454627 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:50.454689 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:50.484855 1261197 cri.go:89] found id: ""
	I1217 00:54:50.484877 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.484884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:50.484890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:50.484950 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:50.510003 1261197 cri.go:89] found id: ""
	I1217 00:54:50.510018 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.510025 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:50.510030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:50.510089 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:50.533511 1261197 cri.go:89] found id: ""
	I1217 00:54:50.533525 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.533532 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:50.533537 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:50.533602 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:50.558386 1261197 cri.go:89] found id: ""
	I1217 00:54:50.558400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.558407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:50.558419 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:50.558476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:50.587409 1261197 cri.go:89] found id: ""
	I1217 00:54:50.587422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.587429 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:50.587437 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:50.587447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:50.644042 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:50.644061 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:50.661242 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:50.661257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:50.732592 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:50.732602 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:50.732613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:50.793447 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:50.793466 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
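Stepping back, the gather cycles repeat on a roughly three-second cadence (00:54:33, :35, :38, :41, ...): a wait loop polling for the apiserver until the start attempt eventually times out. A stripped-down model of that loop with an explicit deadline (the interval and timeout values are illustrative, not minikube's actual constants):

    # Poll for the apiserver process until it appears or a deadline passes.
    deadline=$(( $(date +%s) + 300 ))   # illustrative 5-minute budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3   # matches the ~3 s spacing between cycles in this log
    done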
	I1217 00:54:53.322439 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:53.332470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:53.332535 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:53.357094 1261197 cri.go:89] found id: ""
	I1217 00:54:53.357108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.357116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:53.357121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:53.357182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:53.381629 1261197 cri.go:89] found id: ""
	I1217 00:54:53.381667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.381674 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:53.381679 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:53.381743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:53.407630 1261197 cri.go:89] found id: ""
	I1217 00:54:53.407644 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.407651 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:53.407656 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:53.407718 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:53.435972 1261197 cri.go:89] found id: ""
	I1217 00:54:53.435986 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.435993 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:53.435999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:53.436059 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:53.461545 1261197 cri.go:89] found id: ""
	I1217 00:54:53.461558 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.461565 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:53.461570 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:53.461629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:53.491744 1261197 cri.go:89] found id: ""
	I1217 00:54:53.491758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.491766 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:53.491771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:53.491836 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:53.517147 1261197 cri.go:89] found id: ""
	I1217 00:54:53.517161 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.517170 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:53.517177 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:53.517188 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:53.573158 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:53.573177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:53.588088 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:53.588104 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:53.665911 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:53.665933 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:53.665945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:53.735506 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:53.735530 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.268624 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:56.279995 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:56.280060 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:56.304847 1261197 cri.go:89] found id: ""
	I1217 00:54:56.304874 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.304881 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:56.304887 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:56.304952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:56.329820 1261197 cri.go:89] found id: ""
	I1217 00:54:56.329834 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.329841 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:56.329846 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:56.329902 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:56.354667 1261197 cri.go:89] found id: ""
	I1217 00:54:56.354685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.354695 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:56.354700 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:56.354779 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:56.383823 1261197 cri.go:89] found id: ""
	I1217 00:54:56.383837 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.383844 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:56.383850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:56.383907 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:56.408219 1261197 cri.go:89] found id: ""
	I1217 00:54:56.408233 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.408240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:56.408246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:56.408305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:56.433745 1261197 cri.go:89] found id: ""
	I1217 00:54:56.433758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.433765 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:56.433771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:56.433843 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:56.458631 1261197 cri.go:89] found id: ""
	I1217 00:54:56.458645 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.458653 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:56.458660 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:56.458671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:56.473217 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:56.473233 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:56.540570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:56.540579 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:56.540591 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:56.605775 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:56.605795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.659436 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:56.659452 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.225973 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:59.236165 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:59.236223 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:59.262172 1261197 cri.go:89] found id: ""
	I1217 00:54:59.262185 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.262193 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:59.262198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:59.262254 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:59.286403 1261197 cri.go:89] found id: ""
	I1217 00:54:59.286417 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.286425 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:59.286430 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:59.286489 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:59.311254 1261197 cri.go:89] found id: ""
	I1217 00:54:59.311268 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.311276 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:59.311280 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:59.311336 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:59.339495 1261197 cri.go:89] found id: ""
	I1217 00:54:59.339510 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.339519 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:59.339524 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:59.339583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:59.364038 1261197 cri.go:89] found id: ""
	I1217 00:54:59.364052 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.364068 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:59.364074 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:59.364130 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:59.388359 1261197 cri.go:89] found id: ""
	I1217 00:54:59.388373 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.388391 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:59.388396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:59.388462 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:59.412775 1261197 cri.go:89] found id: ""
	I1217 00:54:59.412789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.412806 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:59.412815 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:59.412824 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:59.475190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:59.475211 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:59.504917 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:59.504933 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.561462 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:59.561481 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:59.576156 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:59.576171 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:59.641179 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
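The describe-nodes failure is a direct consequence of the empty probe results: kubectl is run against the node's own kubeconfig at /var/lib/minikube/kubeconfig, which points at https://localhost:8441, and nothing is listening there. The five near-identical memcache.go:265 lines per attempt are kubectl's cached discovery client retrying the initial /api group-list request before giving up. A quick manual check from inside the node would look like this (a sketch; /healthz is the standard apiserver health endpoint, -k skips TLS verification):

    # Expect "connection refused" while the apiserver is down:
    curl -sk https://localhost:8441/healthz
    # Confirm nothing is listening on the port
    # (-l listening, -t tcp, -n numeric, -p show owning process):
    sudo ss -ltnp | grep 8441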
	I1217 00:55:02.141436 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:02.152012 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:02.152075 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:02.180949 1261197 cri.go:89] found id: ""
	I1217 00:55:02.180963 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.180970 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:02.180976 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:02.181046 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:02.204892 1261197 cri.go:89] found id: ""
	I1217 00:55:02.204915 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.204922 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:02.204928 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:02.205035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:02.230226 1261197 cri.go:89] found id: ""
	I1217 00:55:02.230239 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.230247 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:02.230252 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:02.230309 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:02.254922 1261197 cri.go:89] found id: ""
	I1217 00:55:02.254936 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.254944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:02.254949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:02.255012 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:02.279652 1261197 cri.go:89] found id: ""
	I1217 00:55:02.279666 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.279673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:02.279678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:02.279737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:02.306126 1261197 cri.go:89] found id: ""
	I1217 00:55:02.306139 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.306146 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:02.306152 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:02.306209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:02.330968 1261197 cri.go:89] found id: ""
	I1217 00:55:02.330982 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.330989 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:02.330997 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:02.331007 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:02.386453 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:02.386473 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:02.401019 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:02.401036 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:02.462681 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:02.462691 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:02.462701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:02.523460 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:02.523480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
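The "container status" source is collected with a single shell line that degrades gracefully across runtimes. Unpacked, using only the commands the log itself shows:

    # Backticks substitute the path to crictl, or the bare word "crictl"
    # if it is not on root's PATH:
    CRICTL=`which crictl || echo crictl`
    # List all containers via the CRI; if that fails (no crictl binary or
    # no CRI socket), fall back to the Docker CLI:
    sudo "$CRICTL" ps -a || sudo docker ps -a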
	I1217 00:55:05.051274 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:05.061850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:05.061924 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:05.087077 1261197 cri.go:89] found id: ""
	I1217 00:55:05.087092 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.087099 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:05.087105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:05.087167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:05.113592 1261197 cri.go:89] found id: ""
	I1217 00:55:05.113607 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.113614 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:05.113620 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:05.113702 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:05.139004 1261197 cri.go:89] found id: ""
	I1217 00:55:05.139019 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.139026 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:05.139031 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:05.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:05.163703 1261197 cri.go:89] found id: ""
	I1217 00:55:05.163717 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.163725 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:05.163731 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:05.163791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:05.188990 1261197 cri.go:89] found id: ""
	I1217 00:55:05.189004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.189011 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:05.189024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:05.189083 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:05.218147 1261197 cri.go:89] found id: ""
	I1217 00:55:05.218161 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.218168 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:05.218174 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:05.218246 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:05.242561 1261197 cri.go:89] found id: ""
	I1217 00:55:05.242575 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.242592 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:05.242600 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:05.242610 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:05.303683 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:05.303701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.331484 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:05.331499 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:05.392845 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:05.392868 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:05.407882 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:05.407898 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:05.474193 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
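Each gathering pass also pulls a bounded tail of the runtime and kubelet journals plus filtered kernel messages. The flags, annotated (util-linux dmesg; exact option spellings may differ on older versions):

    # Last 400 journal lines for one systemd unit:
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    # Kernel ring buffer: -P disables the pager, -H human-readable output,
    # -L=never disables color, --level keeps only warnings and worse:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400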
	I1217 00:55:07.974416 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:07.984527 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:07.984588 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:08.011706 1261197 cri.go:89] found id: ""
	I1217 00:55:08.011722 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.011730 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:08.011735 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:08.011803 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:08.038984 1261197 cri.go:89] found id: ""
	I1217 00:55:08.038998 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.039005 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:08.039011 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:08.039072 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:08.066839 1261197 cri.go:89] found id: ""
	I1217 00:55:08.066854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.066861 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:08.066866 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:08.066928 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:08.096940 1261197 cri.go:89] found id: ""
	I1217 00:55:08.096954 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.096962 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:08.096968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:08.097026 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:08.124219 1261197 cri.go:89] found id: ""
	I1217 00:55:08.124232 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.124240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:08.124245 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:08.124308 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:08.149339 1261197 cri.go:89] found id: ""
	I1217 00:55:08.149353 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.149360 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:08.149365 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:08.149424 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:08.173327 1261197 cri.go:89] found id: ""
	I1217 00:55:08.173350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.173358 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:08.173366 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:08.173376 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:08.229871 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:08.229891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:08.244853 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:08.244877 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:08.312062 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:08.312072 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:08.312082 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:08.373219 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:08.373238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:10.901813 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:10.913062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:10.913131 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:10.939973 1261197 cri.go:89] found id: ""
	I1217 00:55:10.939987 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.939994 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:10.939999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:10.940057 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:10.965488 1261197 cri.go:89] found id: ""
	I1217 00:55:10.965502 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.965509 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:10.965514 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:10.965574 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:10.990743 1261197 cri.go:89] found id: ""
	I1217 00:55:10.990758 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.990766 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:10.990772 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:10.990851 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:11.017298 1261197 cri.go:89] found id: ""
	I1217 00:55:11.017322 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.017330 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:11.017336 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:11.017405 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:11.043148 1261197 cri.go:89] found id: ""
	I1217 00:55:11.043163 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.043170 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:11.043175 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:11.043236 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:11.070182 1261197 cri.go:89] found id: ""
	I1217 00:55:11.070196 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.070207 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:11.070213 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:11.070284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:11.098403 1261197 cri.go:89] found id: ""
	I1217 00:55:11.098419 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.098426 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:11.098434 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:11.098445 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:11.154712 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:11.154732 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:11.171447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:11.171469 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:11.235332 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:11.235344 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:11.235354 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:11.298591 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:11.298611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
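The cri.go:54 lines name the state root /run/containerd/runc/k8s.io: containerd's runc shim keeps per-container state there under the k8s.io namespace that Kubernetes uses. Because crictl ps -a includes exited containers, an empty result for every expected name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) suggests the control-plane static pods were never created at all, so no add-on pods could be scheduled either. To inspect by hand (a sketch using standard crictl and journalctl subcommands):

    # Any pod sandboxes at all in the k8s.io namespace?
    sudo crictl pods
    # Kubelet's view of why the static pods are missing:
    sudo journalctl -u kubelet -n 100 --no-pager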
	I1217 00:55:13.826200 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:13.836246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:13.836303 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:13.861099 1261197 cri.go:89] found id: ""
	I1217 00:55:13.861113 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.861120 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:13.861125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:13.861183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:13.898315 1261197 cri.go:89] found id: ""
	I1217 00:55:13.898328 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.898335 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:13.898340 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:13.898403 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:13.927870 1261197 cri.go:89] found id: ""
	I1217 00:55:13.927884 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.927902 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:13.927908 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:13.927986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:13.956407 1261197 cri.go:89] found id: ""
	I1217 00:55:13.956421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.956428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:13.956433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:13.956500 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:13.981521 1261197 cri.go:89] found id: ""
	I1217 00:55:13.981553 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.981560 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:13.981565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:13.981630 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:14.007326 1261197 cri.go:89] found id: ""
	I1217 00:55:14.007350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.007358 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:14.007364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:14.007433 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:14.034794 1261197 cri.go:89] found id: ""
	I1217 00:55:14.034809 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.034816 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:14.034824 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:14.034835 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:14.091355 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:14.091375 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:14.106561 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:14.106579 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:14.176400 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:14.176410 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:14.176420 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:14.242568 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:14.242593 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:16.776330 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:16.786496 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:16.786558 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:16.811486 1261197 cri.go:89] found id: ""
	I1217 00:55:16.811500 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.811507 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:16.811512 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:16.811576 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:16.839885 1261197 cri.go:89] found id: ""
	I1217 00:55:16.839898 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.839905 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:16.839910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:16.839972 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:16.865332 1261197 cri.go:89] found id: ""
	I1217 00:55:16.865346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.865353 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:16.865359 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:16.865419 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:16.904044 1261197 cri.go:89] found id: ""
	I1217 00:55:16.904058 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.904065 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:16.904071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:16.904133 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:16.934495 1261197 cri.go:89] found id: ""
	I1217 00:55:16.934508 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.934515 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:16.934521 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:16.934582 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:16.959038 1261197 cri.go:89] found id: ""
	I1217 00:55:16.959052 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.959060 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:16.959065 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:16.959123 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:16.987609 1261197 cri.go:89] found id: ""
	I1217 00:55:16.987622 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.987630 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:16.987637 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:16.987647 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:17.046635 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:17.046655 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:17.062321 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:17.062345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:17.130440 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:17.130450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:17.130460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:17.192501 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:17.192521 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:19.724677 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:19.736386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:19.736459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:19.763100 1261197 cri.go:89] found id: ""
	I1217 00:55:19.763114 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.763121 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:19.763127 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:19.763185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:19.791470 1261197 cri.go:89] found id: ""
	I1217 00:55:19.791483 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.791490 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:19.791495 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:19.791552 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:19.816395 1261197 cri.go:89] found id: ""
	I1217 00:55:19.816410 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.816417 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:19.816422 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:19.816482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:19.841971 1261197 cri.go:89] found id: ""
	I1217 00:55:19.841984 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.841991 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:19.841997 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:19.842058 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:19.866385 1261197 cri.go:89] found id: ""
	I1217 00:55:19.866399 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.866406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:19.866411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:19.866468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:19.904121 1261197 cri.go:89] found id: ""
	I1217 00:55:19.904135 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.904153 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:19.904160 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:19.904217 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:19.940290 1261197 cri.go:89] found id: ""
	I1217 00:55:19.940304 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.940311 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:19.940319 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:19.940329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:19.955177 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:19.955193 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:20.024806 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:20.024817 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:20.024830 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:20.088972 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:20.088996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:20.122058 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:20.122075 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
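Two details in the cycles above are worth noting: the timestamps advance by roughly three seconds (00:54:56, 00:54:59, 00:55:02, ...), i.e. a fixed poll interval, and the "Gathering logs for ..." order shuffles between cycles (sometimes kubelet first, sometimes dmesg), which is consistent with the log sources being iterated from a Go map, whose iteration order is randomized. The overall wait reduces to something like this sketch (the real loop also enforces an overall timeout):

    # Poll every ~3s until an apiserver process appears:
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done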
	I1217 00:55:22.679929 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:22.690102 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:22.690162 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:22.717462 1261197 cri.go:89] found id: ""
	I1217 00:55:22.717476 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.717483 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:22.717489 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:22.717550 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:22.744363 1261197 cri.go:89] found id: ""
	I1217 00:55:22.744377 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.744390 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:22.744395 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:22.744454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:22.770975 1261197 cri.go:89] found id: ""
	I1217 00:55:22.770989 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.770996 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:22.771001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:22.771068 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:22.795702 1261197 cri.go:89] found id: ""
	I1217 00:55:22.795716 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.795724 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:22.795729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:22.795787 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:22.820186 1261197 cri.go:89] found id: ""
	I1217 00:55:22.820200 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.820206 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:22.820212 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:22.820269 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:22.844518 1261197 cri.go:89] found id: ""
	I1217 00:55:22.844533 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.844540 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:22.844545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:22.844604 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:22.884821 1261197 cri.go:89] found id: ""
	I1217 00:55:22.884834 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.884841 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:22.884849 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:22.884860 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:22.901504 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:22.901520 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:22.975115 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:22.975125 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:22.975135 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:23.036546 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:23.036566 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:23.070681 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:23.070697 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
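The probe repeated above reduces to two checks: is a kube-apiserver process running, and does the CRI report any control-plane containers. A minimal standalone sketch of that probe, assuming crictl is on PATH and containerd is the runtime (commands copied from the Run: lines above):

    #!/usr/bin/env bash
    # Probe the control plane the way the retry loop above does: newest process
    # whose full command line matches the pattern, then a per-component
    # container lookup via the CRI (all states, IDs only).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done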
	I1217 00:55:25.627462 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:25.638109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:25.638168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:25.671791 1261197 cri.go:89] found id: ""
	I1217 00:55:25.671806 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.671813 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:25.671821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:25.671884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:25.696990 1261197 cri.go:89] found id: ""
	I1217 00:55:25.697004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.697011 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:25.697016 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:25.697082 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:25.722087 1261197 cri.go:89] found id: ""
	I1217 00:55:25.722101 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.722110 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:25.722115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:25.722184 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:25.747407 1261197 cri.go:89] found id: ""
	I1217 00:55:25.747421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.747428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:25.747433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:25.747495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:25.772602 1261197 cri.go:89] found id: ""
	I1217 00:55:25.772617 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.772623 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:25.772628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:25.772694 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:25.802452 1261197 cri.go:89] found id: ""
	I1217 00:55:25.802466 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.802473 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:25.802478 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:25.802538 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:25.827066 1261197 cri.go:89] found id: ""
	I1217 00:55:25.827081 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.827088 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:25.827096 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:25.827109 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.886656 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:25.886676 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:25.903090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:25.903108 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:25.973568 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:25.973578 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:25.973587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:26.036642 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:26.036662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:28.571573 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:28.581543 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:28.581601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:28.607202 1261197 cri.go:89] found id: ""
	I1217 00:55:28.607216 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.607224 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:28.607229 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:28.607288 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:28.630842 1261197 cri.go:89] found id: ""
	I1217 00:55:28.630857 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.630864 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:28.630869 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:28.630927 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:28.656052 1261197 cri.go:89] found id: ""
	I1217 00:55:28.656066 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.656073 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:28.656079 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:28.656135 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:28.680008 1261197 cri.go:89] found id: ""
	I1217 00:55:28.680022 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.680029 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:28.680034 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:28.680104 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:28.704668 1261197 cri.go:89] found id: ""
	I1217 00:55:28.704682 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.704689 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:28.704694 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:28.704756 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:28.733961 1261197 cri.go:89] found id: ""
	I1217 00:55:28.733974 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.733981 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:28.733986 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:28.734042 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:28.759990 1261197 cri.go:89] found id: ""
	I1217 00:55:28.760005 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.760013 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:28.760021 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:28.760030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:28.815642 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:28.815661 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:28.830313 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:28.830333 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:28.907265 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:28.907287 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:28.907299 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:28.978223 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:28.978244 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
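Every "failed describe nodes" entry above is the same symptom: nothing is listening on the apiserver port (8441 here). A hedged way to confirm this by hand from inside the node, reusing the exact kubectl invocation from the log plus a direct probe of the port (/livez is the apiserver's liveness endpoint; -k skips certificate verification):

    # Re-run the failing check verbatim, then probe the port directly.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    curl -sk https://localhost:8441/livez || echo 'nothing listening on 8441'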
	I1217 00:55:31.508374 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:31.518631 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:31.518696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:31.543672 1261197 cri.go:89] found id: ""
	I1217 00:55:31.543686 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.543693 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:31.543701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:31.543760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:31.568914 1261197 cri.go:89] found id: ""
	I1217 00:55:31.568929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.568944 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:31.568949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:31.569017 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:31.593432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.593453 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.593461 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:31.593466 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:31.593537 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:31.619217 1261197 cri.go:89] found id: ""
	I1217 00:55:31.619231 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.619238 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:31.619243 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:31.619299 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:31.647432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.647445 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.647453 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:31.647458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:31.647522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:31.675117 1261197 cri.go:89] found id: ""
	I1217 00:55:31.675130 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.675138 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:31.675143 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:31.675200 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:31.698973 1261197 cri.go:89] found id: ""
	I1217 00:55:31.698986 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.698993 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:31.699001 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:31.699010 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:31.754429 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:31.754447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:31.768968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:31.768984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:31.831791 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:31.831801 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:31.831811 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:31.900759 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:31.900777 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:34.429727 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:34.440562 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:34.440629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:34.465412 1261197 cri.go:89] found id: ""
	I1217 00:55:34.465425 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.465433 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:34.465438 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:34.465496 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:34.489937 1261197 cri.go:89] found id: ""
	I1217 00:55:34.489951 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.489978 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:34.489987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:34.490055 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:34.520581 1261197 cri.go:89] found id: ""
	I1217 00:55:34.520602 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.520610 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:34.520615 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:34.520682 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:34.547718 1261197 cri.go:89] found id: ""
	I1217 00:55:34.547732 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.547739 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:34.547744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:34.547806 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:34.572103 1261197 cri.go:89] found id: ""
	I1217 00:55:34.572116 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.572133 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:34.572138 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:34.572209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:34.600789 1261197 cri.go:89] found id: ""
	I1217 00:55:34.600819 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.600827 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:34.600832 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:34.600921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:34.627220 1261197 cri.go:89] found id: ""
	I1217 00:55:34.627234 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.627240 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:34.627248 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:34.627257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:34.682307 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:34.682327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:34.697255 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:34.697271 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:34.764504 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:34.764515 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:34.764525 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:34.826010 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:34.826029 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
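Between probes the runner gathers the same four log sources each time. Collected in one sketch (commands copied from the Run: lines above; the crictl-or-docker fallback is preserved):

    # The per-retry log bundle, as gathered above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a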
	I1217 00:55:37.353119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:37.363135 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:37.363198 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:37.387754 1261197 cri.go:89] found id: ""
	I1217 00:55:37.387773 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.387781 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:37.387787 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:37.387845 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:37.413391 1261197 cri.go:89] found id: ""
	I1217 00:55:37.413404 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.413411 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:37.413417 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:37.413474 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:37.439523 1261197 cri.go:89] found id: ""
	I1217 00:55:37.439537 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.439544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:37.439549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:37.439607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:37.469209 1261197 cri.go:89] found id: ""
	I1217 00:55:37.469223 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.469230 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:37.469235 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:37.469296 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:37.495794 1261197 cri.go:89] found id: ""
	I1217 00:55:37.495807 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.495814 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:37.495819 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:37.495875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:37.520612 1261197 cri.go:89] found id: ""
	I1217 00:55:37.520625 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.520642 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:37.520648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:37.520720 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:37.547269 1261197 cri.go:89] found id: ""
	I1217 00:55:37.547283 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.547290 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:37.547299 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:37.547308 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:37.608835 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:37.608856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.635364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:37.635383 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:37.694966 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:37.694984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:37.709746 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:37.709763 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:37.775515 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:40.277182 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:40.287332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:40.287393 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:40.315852 1261197 cri.go:89] found id: ""
	I1217 00:55:40.315866 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.315873 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:40.315879 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:40.315936 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:40.340196 1261197 cri.go:89] found id: ""
	I1217 00:55:40.340210 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.340217 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:40.340222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:40.340279 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:40.365794 1261197 cri.go:89] found id: ""
	I1217 00:55:40.365815 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.365823 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:40.365828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:40.365899 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:40.391466 1261197 cri.go:89] found id: ""
	I1217 00:55:40.391480 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.391488 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:40.391493 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:40.391553 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:40.420286 1261197 cri.go:89] found id: ""
	I1217 00:55:40.420300 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.420307 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:40.420312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:40.420373 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:40.449247 1261197 cri.go:89] found id: ""
	I1217 00:55:40.449261 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.449268 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:40.449274 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:40.449331 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:40.474951 1261197 cri.go:89] found id: ""
	I1217 00:55:40.474965 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.474972 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:40.474980 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:40.474990 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:40.540502 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:40.540513 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:40.540524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:40.602747 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:40.602766 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:40.629888 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:40.629904 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:40.686174 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:40.686191 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.201825 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:43.212126 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:43.212185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:43.237088 1261197 cri.go:89] found id: ""
	I1217 00:55:43.237109 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.237115 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:43.237121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:43.237183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:43.262148 1261197 cri.go:89] found id: ""
	I1217 00:55:43.262162 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.262177 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:43.262182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:43.262239 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:43.286264 1261197 cri.go:89] found id: ""
	I1217 00:55:43.286278 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.286285 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:43.286290 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:43.286346 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:43.310644 1261197 cri.go:89] found id: ""
	I1217 00:55:43.310657 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.310664 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:43.310670 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:43.310730 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:43.335131 1261197 cri.go:89] found id: ""
	I1217 00:55:43.335146 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.335153 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:43.335158 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:43.335220 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:43.364301 1261197 cri.go:89] found id: ""
	I1217 00:55:43.364315 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.364323 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:43.364331 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:43.364390 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:43.391204 1261197 cri.go:89] found id: ""
	I1217 00:55:43.391218 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.391225 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:43.391233 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:43.391252 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:43.450751 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:43.450771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.466709 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:43.466726 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:43.533713 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:43.533723 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:43.533734 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:43.601250 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:43.601269 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.134875 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:46.146399 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:46.146468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:46.179014 1261197 cri.go:89] found id: ""
	I1217 00:55:46.179028 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.179044 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:46.179050 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:46.179115 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:46.208346 1261197 cri.go:89] found id: ""
	I1217 00:55:46.208360 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.208377 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:46.208383 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:46.208441 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:46.233331 1261197 cri.go:89] found id: ""
	I1217 00:55:46.233346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.233361 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:46.233367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:46.233423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:46.259330 1261197 cri.go:89] found id: ""
	I1217 00:55:46.259344 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.259351 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:46.259357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:46.259413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:46.283871 1261197 cri.go:89] found id: ""
	I1217 00:55:46.283885 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.283902 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:46.283907 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:46.283975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:46.308301 1261197 cri.go:89] found id: ""
	I1217 00:55:46.308316 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.308331 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:46.308337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:46.308397 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:46.332677 1261197 cri.go:89] found id: ""
	I1217 00:55:46.332691 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.332699 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:46.332706 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:46.332716 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:46.347830 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:46.347846 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:46.413688 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:46.413699 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:46.413709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:46.475238 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:46.475260 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.502692 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:46.502708 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
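
The pass above is a fixed sequence minikube repeats while waiting for the control plane: probe for a running apiserver process, ask crictl for each control-plane container by name, and, when every lookup comes back empty, fall back to gathering logs. A minimal sketch of running the same per-component lookup by hand, assuming a containerd-based minikube node reachable via "minikube ssh" (the component list is copied from the queries above, not from any documented interface):

	# Sketch: approximate the per-component container lookup seen in this log.
	# Run inside the node; assumes crictl is on PATH.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done

Here every component prints as missing, which matches the empty found id: "" results in each cycle of this log.
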
	I1217 00:55:49.063356 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:49.074298 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:49.074364 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:49.102541 1261197 cri.go:89] found id: ""
	I1217 00:55:49.102555 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.102562 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:49.102567 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:49.102625 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:49.132690 1261197 cri.go:89] found id: ""
	I1217 00:55:49.132706 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.132713 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:49.132718 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:49.132780 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:49.159962 1261197 cri.go:89] found id: ""
	I1217 00:55:49.159976 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.159983 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:49.159987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:49.160047 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:49.186672 1261197 cri.go:89] found id: ""
	I1217 00:55:49.186685 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.186692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:49.186703 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:49.186760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:49.215488 1261197 cri.go:89] found id: ""
	I1217 00:55:49.215506 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.215513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:49.215518 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:49.215594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:49.243652 1261197 cri.go:89] found id: ""
	I1217 00:55:49.243667 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.243674 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:49.243680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:49.243746 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:49.271745 1261197 cri.go:89] found id: ""
	I1217 00:55:49.271762 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.271769 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:49.271777 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:49.271789 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:49.305614 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:49.305638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.361396 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:49.361414 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:49.377081 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:49.377097 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:49.448394 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:49.448405 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:49.448416 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
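
Every describe-nodes attempt in this pass fails the same way: the kubectl under /var/lib/minikube/binaries cannot reach https://localhost:8441 (the apiserver port this profile was started with) and gets connection refused, which is consistent with crictl finding no kube-apiserver container at all. A hedged way to confirm from inside the node whether anything is listening there (a manual spot check, not part of the test harness; 8441 is taken from the errors above):

	# Sketch: check whether any process serves the apiserver port 8441.
	sudo ss -ltnp 'sport = :8441'
	# And whether anything answers over TLS (certificate checks skipped):
	curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver not answering"
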
	I1217 00:55:52.014619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:52.025272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:52.025334 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:52.050179 1261197 cri.go:89] found id: ""
	I1217 00:55:52.050193 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.050201 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:52.050206 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:52.050267 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:52.075171 1261197 cri.go:89] found id: ""
	I1217 00:55:52.075186 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.075193 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:52.075198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:52.075258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:52.100730 1261197 cri.go:89] found id: ""
	I1217 00:55:52.100745 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.100752 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:52.100758 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:52.100819 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:52.139001 1261197 cri.go:89] found id: ""
	I1217 00:55:52.139016 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.139023 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:52.139028 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:52.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:52.167837 1261197 cri.go:89] found id: ""
	I1217 00:55:52.167854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.167861 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:52.167876 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:52.167939 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:52.195893 1261197 cri.go:89] found id: ""
	I1217 00:55:52.195907 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.195914 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:52.195919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:52.195986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:52.226474 1261197 cri.go:89] found id: ""
	I1217 00:55:52.226489 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.226496 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:52.226504 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:52.226514 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:52.283106 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:52.283125 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:52.298214 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:52.298230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:52.368183 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:52.368194 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:52.368205 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.430851 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:52.430873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
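
The timestamps show the whole diagnostic pass repeating roughly every three seconds, driven by the pgrep probe at the top of each cycle, which never succeeds. The same wait-for-apiserver pattern, sketched in bash with an assumed deadline (minikube's actual timeout is not visible in this excerpt):

	# Sketch of the retry pattern visible in this log: poll for an apiserver
	# process until a deadline, sleeping ~3s between attempts. SECONDS is a
	# bash builtin; the 300s budget is an assumption, not minikube's value.
	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    break
	  fi
	  sleep 3
	done
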
	I1217 00:55:54.962672 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.972814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:54.972874 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:54.998554 1261197 cri.go:89] found id: ""
	I1217 00:55:54.998568 1261197 logs.go:282] 0 containers: []
	W1217 00:55:54.998575 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:54.998580 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:54.998640 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:55.027159 1261197 cri.go:89] found id: ""
	I1217 00:55:55.027174 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.027181 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:55.027187 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:55.027258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:55.057204 1261197 cri.go:89] found id: ""
	I1217 00:55:55.057219 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.057226 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:55.057241 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:55.057302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:55.082858 1261197 cri.go:89] found id: ""
	I1217 00:55:55.082872 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.082880 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:55.082885 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:55.082952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:55.108074 1261197 cri.go:89] found id: ""
	I1217 00:55:55.108088 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.108095 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:55.108100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:55.108168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:55.142170 1261197 cri.go:89] found id: ""
	I1217 00:55:55.142184 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.142204 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:55.142210 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:55.142277 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:55.174306 1261197 cri.go:89] found id: ""
	I1217 00:55:55.174333 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.174341 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:55.174349 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:55.174361 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:55.234605 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:55.234625 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:55.249756 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:55.249773 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:55.312439 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:55.312450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:55.312460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:55.373256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:55.373275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:57.900997 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.911464 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:57.911522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:57.936082 1261197 cri.go:89] found id: ""
	I1217 00:55:57.936096 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.936104 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:57.936115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:57.936172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:57.960175 1261197 cri.go:89] found id: ""
	I1217 00:55:57.960190 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.960197 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:57.960202 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:57.960266 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:57.986658 1261197 cri.go:89] found id: ""
	I1217 00:55:57.986671 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.986678 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:57.986684 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:57.986743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:58.012944 1261197 cri.go:89] found id: ""
	I1217 00:55:58.012959 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.012967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:58.012973 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:58.013035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:58.041226 1261197 cri.go:89] found id: ""
	I1217 00:55:58.041241 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.041248 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:58.041253 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:58.041319 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:58.066914 1261197 cri.go:89] found id: ""
	I1217 00:55:58.066929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.066937 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:58.066943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:58.067000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:58.090571 1261197 cri.go:89] found id: ""
	I1217 00:55:58.090586 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.090593 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:58.090601 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:58.090611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:58.161546 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:58.161556 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:58.161578 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:58.230111 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:58.230131 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:58.259134 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:58.259150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:58.315698 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:58.315715 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:00.831924 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.842106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:00.842166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:00.868037 1261197 cri.go:89] found id: ""
	I1217 00:56:00.868051 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.868057 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:00.868062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:00.868138 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:00.893020 1261197 cri.go:89] found id: ""
	I1217 00:56:00.893046 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.893053 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:00.893059 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:00.893125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:00.918054 1261197 cri.go:89] found id: ""
	I1217 00:56:00.918068 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.918075 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:00.918081 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:00.918139 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:00.947584 1261197 cri.go:89] found id: ""
	I1217 00:56:00.947599 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.947607 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:00.947612 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:00.947675 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:00.974913 1261197 cri.go:89] found id: ""
	I1217 00:56:00.974929 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.974936 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:00.974941 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:00.975000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:00.998262 1261197 cri.go:89] found id: ""
	I1217 00:56:00.998276 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.998284 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:00.998289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:00.998345 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:01.025055 1261197 cri.go:89] found id: ""
	I1217 00:56:01.025071 1261197 logs.go:282] 0 containers: []
	W1217 00:56:01.025079 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:01.025099 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:01.025110 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:01.080854 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:01.080873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:01.095680 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:01.095698 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:01.174559 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:01.164757   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.165678   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.167766   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.168430   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.170271   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:01.164757   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.165678   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.167766   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.168430   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.170271   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:01.174574 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:01.174587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:01.240953 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:01.240973 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:03.778460 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.788536 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:03.788601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:03.816065 1261197 cri.go:89] found id: ""
	I1217 00:56:03.816080 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.816087 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:03.816093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:03.816158 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:03.840359 1261197 cri.go:89] found id: ""
	I1217 00:56:03.840373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.840381 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:03.840386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:03.840443 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:03.865338 1261197 cri.go:89] found id: ""
	I1217 00:56:03.865351 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.865359 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:03.865364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:03.865421 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:03.889916 1261197 cri.go:89] found id: ""
	I1217 00:56:03.889930 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.889937 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:03.889943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:03.890011 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:03.913782 1261197 cri.go:89] found id: ""
	I1217 00:56:03.913796 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.913804 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:03.913815 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:03.913875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:03.938356 1261197 cri.go:89] found id: ""
	I1217 00:56:03.938371 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.938379 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:03.938385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:03.938447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:03.963432 1261197 cri.go:89] found id: ""
	I1217 00:56:03.963446 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.963454 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:03.963461 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:03.963474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:04.024730 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:04.024752 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:04.057316 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:04.057331 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:04.115813 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:04.115832 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:04.133889 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:04.133905 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:04.212782 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:04.204758   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.205392   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.206948   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.207288   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.208782   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:04.204758   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.205392   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.206948   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.207288   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.208782   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
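
With no containers to inspect, the only remaining signal is in the node's own units, so each pass also pulls the last 400 lines of the kubelet and containerd journals, filtered dmesg, and a container-status listing. Collecting the same bundle by hand (flags copied verbatim from the commands above; output file names are arbitrary):

	# Sketch: gather the same log bundle this pass collects, into local files.
	sudo journalctl -u kubelet -n 400    > kubelet.log
	sudo journalctl -u containerd -n 400 > containerd.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo "$(which crictl || echo crictl)" ps -a > container-status.log
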
	I1217 00:56:06.713766 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.723767 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:06.723837 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:06.747547 1261197 cri.go:89] found id: ""
	I1217 00:56:06.747561 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.747568 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:06.747574 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:06.747632 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:06.772850 1261197 cri.go:89] found id: ""
	I1217 00:56:06.772864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.772871 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:06.772877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:06.772942 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:06.797087 1261197 cri.go:89] found id: ""
	I1217 00:56:06.797101 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.797108 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:06.797113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:06.797171 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:06.821815 1261197 cri.go:89] found id: ""
	I1217 00:56:06.821829 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.821836 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:06.821842 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:06.821906 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:06.850207 1261197 cri.go:89] found id: ""
	I1217 00:56:06.850221 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.850229 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:06.850234 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:06.850294 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:06.874139 1261197 cri.go:89] found id: ""
	I1217 00:56:06.874153 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.874160 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:06.874166 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:06.874224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:06.899438 1261197 cri.go:89] found id: ""
	I1217 00:56:06.899453 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.899461 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:06.899469 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:06.899480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:06.967530 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:06.958975   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.959516   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961123   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961674   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.963331   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:06.958975   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.959516   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961123   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961674   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.963331   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:06.967542 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:06.967554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:07.030281 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:07.030301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:07.062210 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:07.062226 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:07.121373 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:07.121391 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:09.638141 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.648301 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:09.648359 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:09.672936 1261197 cri.go:89] found id: ""
	I1217 00:56:09.672951 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.672959 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:09.672964 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:09.673022 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:09.697500 1261197 cri.go:89] found id: ""
	I1217 00:56:09.697513 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.697520 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:09.697526 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:09.697583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:09.723330 1261197 cri.go:89] found id: ""
	I1217 00:56:09.723344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.723352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:09.723360 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:09.723423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:09.747017 1261197 cri.go:89] found id: ""
	I1217 00:56:09.747032 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.747039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:09.747044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:09.747100 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:09.771652 1261197 cri.go:89] found id: ""
	I1217 00:56:09.771666 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.771673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:09.771678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:09.771737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:09.799785 1261197 cri.go:89] found id: ""
	I1217 00:56:09.799799 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.799807 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:09.799812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:09.799871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:09.827063 1261197 cri.go:89] found id: ""
	I1217 00:56:09.827077 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.827085 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:09.827093 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:09.827103 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:09.894392 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:09.886579   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.887120   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.888619   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.889055   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.890605   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:09.886579   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.887120   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.888619   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.889055   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.890605   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:09.894403 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:09.894413 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:09.955961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:09.955981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:09.982364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:09.982380 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:10.051689 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:10.051709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
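The lines above are one full pass of minikube's wait-for-apiserver loop: a pgrep probe for a running kube-apiserver, a crictl listing for each control-plane component, and, when everything comes back empty, a sweep of kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics. The timestamps below show the loop repeating on a roughly three-second cadence. A minimal bash sketch of the same poll-until-healthy shape, assuming a 3s interval and a 300s deadline (both values are illustrative assumptions, not taken from the minikube source):

    # poll until an apiserver process appears, gathering diagnostics on each miss
    deadline=$((SECONDS + 300))   # assumed timeout, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out'; exit 1; }
        sudo journalctl -u kubelet -n 400 --no-pager > /tmp/kubelet.log  # same gather as above
        sleep 3
    done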
	I1217 00:56:12.568963 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.579001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:12.579065 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:12.603247 1261197 cri.go:89] found id: ""
	I1217 00:56:12.603261 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.603269 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:12.603275 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:12.603332 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:12.635591 1261197 cri.go:89] found id: ""
	I1217 00:56:12.635606 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.635612 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:12.635617 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:12.635676 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:12.659802 1261197 cri.go:89] found id: ""
	I1217 00:56:12.659817 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.659824 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:12.659830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:12.659887 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:12.684671 1261197 cri.go:89] found id: ""
	I1217 00:56:12.684684 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.684692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:12.684697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:12.684766 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:12.712570 1261197 cri.go:89] found id: ""
	I1217 00:56:12.712584 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.712606 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:12.712611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:12.712668 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:12.739330 1261197 cri.go:89] found id: ""
	I1217 00:56:12.739345 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.739353 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:12.739358 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:12.739416 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:12.767372 1261197 cri.go:89] found id: ""
	I1217 00:56:12.767386 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.767393 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:12.767401 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:12.767411 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:12.822789 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:12.822807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.839685 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:12.839702 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:12.916219 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:12.907759   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.908464   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910139   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910712   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.912266   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:12.907759   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.908464   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910139   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910712   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.912266   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:12.916230 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:12.916241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:12.977800 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:12.977820 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
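Each "listing CRI containers" step above is a thin wrapper over one crictl call. In `sudo crictl ps -a --quiet --name=etcd`, the `-a` flag lists containers in every state (not just running ones), `--quiet` prints only container IDs, and `--name` filters by name; an empty result is what produces the `found id: ""` / `0 containers` pair in the log. To run the same check by hand inside the node:

    # list all kube-apiserver containers, IDs only; empty output means none exist
    sudo crictl ps -a --quiet --name=kube-apiserver
    # drop --quiet for a human-readable table of every container, any state
    sudo crictl ps -a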
	I1217 00:56:15.507621 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.518177 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:15.518240 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:15.544777 1261197 cri.go:89] found id: ""
	I1217 00:56:15.544792 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.544800 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:15.544806 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:15.544864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:15.569420 1261197 cri.go:89] found id: ""
	I1217 00:56:15.569433 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.569441 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:15.569447 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:15.569505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:15.594329 1261197 cri.go:89] found id: ""
	I1217 00:56:15.594344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.594352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:15.594357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:15.594417 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:15.619820 1261197 cri.go:89] found id: ""
	I1217 00:56:15.619834 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.619842 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:15.619847 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:15.619911 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:15.645055 1261197 cri.go:89] found id: ""
	I1217 00:56:15.645076 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.645084 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:15.645090 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:15.645152 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:15.671575 1261197 cri.go:89] found id: ""
	I1217 00:56:15.671590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.671597 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:15.671602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:15.671667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:15.700941 1261197 cri.go:89] found id: ""
	I1217 00:56:15.700955 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.700963 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:15.700971 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:15.700980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.728886 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:15.728931 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:15.784718 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:15.784736 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:15.799312 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:15.799335 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:15.865192 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:15.865203 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:15.865214 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
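The `sudo pgrep -xnf kube-apiserver.*minikube.*` probe that opens each cycle matches the pattern against the full command line (`-f`), requires an exact match of that whole line (`-x`), and reports only the newest matching process (`-n`). A non-zero exit, as here, means no apiserver process exists yet, which is what sends the loop back into another round of log gathering. Equivalent manual check:

    # exit status 0 if a matching apiserver process is running, non-zero otherwise
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo 'not running'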
	I1217 00:56:18.428562 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.438711 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:18.438772 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:18.465045 1261197 cri.go:89] found id: ""
	I1217 00:56:18.465060 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.465067 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:18.465073 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:18.465132 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:18.490715 1261197 cri.go:89] found id: ""
	I1217 00:56:18.490728 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.490736 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:18.490741 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:18.490799 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:18.519522 1261197 cri.go:89] found id: ""
	I1217 00:56:18.519536 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.519544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:18.519549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:18.519611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:18.545098 1261197 cri.go:89] found id: ""
	I1217 00:56:18.545112 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.545119 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:18.545125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:18.545183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:18.570978 1261197 cri.go:89] found id: ""
	I1217 00:56:18.570993 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.571000 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:18.571005 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:18.571063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:18.594800 1261197 cri.go:89] found id: ""
	I1217 00:56:18.594814 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.594822 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:18.594828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:18.594884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:18.618575 1261197 cri.go:89] found id: ""
	I1217 00:56:18.618589 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.618597 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:18.618604 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:18.618613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.680474 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:18.680494 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:18.708635 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:18.708651 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:18.763927 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:18.763949 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:18.780209 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:18.780225 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:18.849998 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
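Every describe-nodes attempt fails the same way: kubectl cannot complete API discovery because nothing is listening on localhost:8441, so each memcache.go line is just the client retrying discovery against a closed port. "Connection refused" (as opposed to a timeout or a TLS error) means the TCP port is closed outright, which is consistent with the empty kube-apiserver container listings above. A quick way to confirm the port state from inside the node, as a hedged sketch (/healthz is the standard apiserver health endpoint):

    # -k skips cert verification; connection refused here confirms no listener
    curl -sk --max-time 5 https://localhost:8441/healthz || echo 'apiserver port closed'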
	I1217 00:56:21.351687 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.362159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:21.362230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:21.390614 1261197 cri.go:89] found id: ""
	I1217 00:56:21.390630 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.390637 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:21.390648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:21.390716 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:21.420609 1261197 cri.go:89] found id: ""
	I1217 00:56:21.420623 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.420630 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:21.420636 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:21.420703 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:21.446943 1261197 cri.go:89] found id: ""
	I1217 00:56:21.446957 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.446964 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:21.446970 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:21.447041 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:21.477813 1261197 cri.go:89] found id: ""
	I1217 00:56:21.477828 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.477835 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:21.477841 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:21.477901 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:21.504024 1261197 cri.go:89] found id: ""
	I1217 00:56:21.504058 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.504065 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:21.504071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:21.504150 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:21.534132 1261197 cri.go:89] found id: ""
	I1217 00:56:21.534146 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.534154 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:21.534159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:21.534222 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:21.558094 1261197 cri.go:89] found id: ""
	I1217 00:56:21.558113 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.558122 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:21.558130 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:21.558141 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:21.620436 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:21.620462 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:21.635283 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:21.635301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:21.698118 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:21.698128 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:21.698139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:21.760016 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:21.760037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
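The "container status" gather uses a small shell fallback chain: the backtick substitution in `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl's full path when it is installed (falling back to the bare name otherwise), and the outer `||` switches to `docker ps -a` only if the crictl invocation fails, so the same gather works across container runtimes. The shape of the pattern, written with modern substitution syntax:

    # prefer tool A, fall back to tool B if A is missing or errors out
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a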
	I1217 00:56:24.289952 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.300354 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:24.300457 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:24.324823 1261197 cri.go:89] found id: ""
	I1217 00:56:24.324838 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.324846 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:24.324852 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:24.324921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:24.349508 1261197 cri.go:89] found id: ""
	I1217 00:56:24.349522 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.349528 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:24.349534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:24.349592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:24.375701 1261197 cri.go:89] found id: ""
	I1217 00:56:24.375716 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.375723 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:24.375729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:24.375791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:24.412359 1261197 cri.go:89] found id: ""
	I1217 00:56:24.412373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.412380 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:24.412385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:24.412447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:24.440423 1261197 cri.go:89] found id: ""
	I1217 00:56:24.440437 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.440444 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:24.440450 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:24.440511 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:24.471294 1261197 cri.go:89] found id: ""
	I1217 00:56:24.471308 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.471316 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:24.471322 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:24.471391 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:24.496845 1261197 cri.go:89] found id: ""
	I1217 00:56:24.496859 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.496866 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:24.496874 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:24.496892 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.526610 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:24.526627 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:24.583266 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:24.583327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:24.598272 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:24.598288 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:24.660553 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:24.660563 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:24.660574 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
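Both the kubelet and containerd gathers are plain journalctl reads scoped to one systemd unit: `-u <unit>` selects the unit and `-n 400` caps output at the last 400 lines, which keeps each diagnostic pass bounded no matter how long the services have been logging. By hand, with `--no-pager` added so the output streams straight to the terminal:

    sudo journalctl -u containerd -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager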
	I1217 00:56:27.222739 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.232603 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:27.232662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:27.259034 1261197 cri.go:89] found id: ""
	I1217 00:56:27.259048 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.259056 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:27.259061 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:27.259122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:27.282406 1261197 cri.go:89] found id: ""
	I1217 00:56:27.282420 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.282427 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:27.282432 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:27.282490 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:27.306518 1261197 cri.go:89] found id: ""
	I1217 00:56:27.306532 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.306540 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:27.306545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:27.306603 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:27.335278 1261197 cri.go:89] found id: ""
	I1217 00:56:27.335292 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.335299 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:27.335305 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:27.335363 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:27.359793 1261197 cri.go:89] found id: ""
	I1217 00:56:27.359808 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.359815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:27.359829 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:27.359888 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:27.399251 1261197 cri.go:89] found id: ""
	I1217 00:56:27.399275 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.399283 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:27.399289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:27.399355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:27.426464 1261197 cri.go:89] found id: ""
	I1217 00:56:27.426477 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.426495 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:27.426503 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:27.426513 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:27.458980 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:27.458996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:27.514403 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:27.514424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:27.528951 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:27.528969 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:27.592165 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:27.592175 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:27.592187 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
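The dmesg gather is similarly bounded: `-H` formats the kernel ring buffer human-readably, `-P` suppresses the pager that `-H` would otherwise start, `-L=never` disables color codes that would garble the captured log, `--level warn,err,crit,alert,emerg` keeps only warning-and-worse messages, and the trailing `tail -n 400` caps the volume. Run standalone:

    # kernel warnings and errors only, last 400 lines, plain text
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400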
	I1217 00:56:30.157841 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.168783 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:30.168847 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:30.194237 1261197 cri.go:89] found id: ""
	I1217 00:56:30.194251 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.194259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:30.194264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:30.194329 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:30.220057 1261197 cri.go:89] found id: ""
	I1217 00:56:30.220072 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.220079 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:30.220084 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:30.220141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:30.244965 1261197 cri.go:89] found id: ""
	I1217 00:56:30.244980 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.244987 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:30.244992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:30.245051 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:30.269893 1261197 cri.go:89] found id: ""
	I1217 00:56:30.269907 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.269914 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:30.269919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:30.269976 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:30.294384 1261197 cri.go:89] found id: ""
	I1217 00:56:30.294398 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.294406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:30.294411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:30.294469 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:30.325240 1261197 cri.go:89] found id: ""
	I1217 00:56:30.325254 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.325261 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:30.325266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:30.325322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:30.349591 1261197 cri.go:89] found id: ""
	I1217 00:56:30.349604 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.349611 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:30.349619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:30.349629 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:30.409349 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:30.409368 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:30.426814 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:30.426833 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:30.497852 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:30.497861 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:30.497872 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.559124 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:30.559146 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
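Reading the cycles together narrows the failure: crictl answers every query cleanly (the listings return empty rather than erroring) and journalctl finds a containerd unit to read, so the container runtime itself is up; what never happens is the kubelet creating any control-plane containers. That points the investigation at the kubelet unit rather than at containerd, for example:

    # is the kubelet unit running, and what is it complaining about?
    systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'error|fail'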
	I1217 00:56:33.090237 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.100535 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:33.100594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:33.124070 1261197 cri.go:89] found id: ""
	I1217 00:56:33.124085 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.124092 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:33.124098 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:33.124155 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:33.148807 1261197 cri.go:89] found id: ""
	I1217 00:56:33.148821 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.148828 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:33.148833 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:33.148894 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:33.175576 1261197 cri.go:89] found id: ""
	I1217 00:56:33.175590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.175597 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:33.175602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:33.175660 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:33.200012 1261197 cri.go:89] found id: ""
	I1217 00:56:33.200026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.200033 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:33.200038 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:33.200095 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:33.224891 1261197 cri.go:89] found id: ""
	I1217 00:56:33.224921 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.224928 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:33.224933 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:33.225001 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:33.249021 1261197 cri.go:89] found id: ""
	I1217 00:56:33.249035 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.249043 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:33.249052 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:33.249108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:33.272696 1261197 cri.go:89] found id: ""
	I1217 00:56:33.272710 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.272717 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:33.272733 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:33.272743 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:33.333826 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:33.333848 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.363111 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:33.363134 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:33.426200 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:33.426219 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:33.444135 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:33.444152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:33.510910 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
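
For context on the block above: every "describe nodes" attempt fails with "connection refused" on [::1]:8441, meaning nothing is listening on the apiserver port at all; kubectl and the kubeconfig are fine, kube-apiserver simply never came up. A manual spot-check from inside the node would look roughly like this (a sketch only; assumes shell access via `minikube ssh` or `docker exec` into the node container):

    sudo ss -tlnp | grep 8441                      # is anything listening on the apiserver port?
    sudo crictl ps -a --name kube-apiserver        # was the container ever created?
    curl -ksf https://localhost:8441/livez; echo   # the apiserver liveness endpoint
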
	I1217 00:56:36.011142 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.023140 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:36.023216 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:36.051599 1261197 cri.go:89] found id: ""
	I1217 00:56:36.051614 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.051622 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:36.051628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:36.051700 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:36.076217 1261197 cri.go:89] found id: ""
	I1217 00:56:36.076231 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.076239 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:36.076244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:36.076305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:36.104998 1261197 cri.go:89] found id: ""
	I1217 00:56:36.105026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.105034 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:36.105039 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:36.105108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:36.130127 1261197 cri.go:89] found id: ""
	I1217 00:56:36.130142 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.130149 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:36.130154 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:36.130224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:36.155615 1261197 cri.go:89] found id: ""
	I1217 00:56:36.155629 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.155636 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:36.155648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:36.155709 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:36.181850 1261197 cri.go:89] found id: ""
	I1217 00:56:36.181864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.181872 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:36.181877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:36.181937 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:36.208111 1261197 cri.go:89] found id: ""
	I1217 00:56:36.208126 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.208133 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:36.208141 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:36.208152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:36.266007 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:36.266031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:36.281259 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:36.281275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:36.346325 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:36.346335 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:36.346345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:36.412961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:36.412981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:38.945107 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.955445 1261197 kubeadm.go:602] duration metric: took 4m3.371937848s to restartPrimaryControlPlane
	W1217 00:56:38.955509 1261197 out.go:285] ! Unable to restart control-plane node(s), will reset cluster
	I1217 00:56:38.955586 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 00:56:39.375604 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:56:39.388977 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:56:39.396884 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:56:39.396954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:56:39.404783 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:56:39.404792 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 00:56:39.404853 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:56:39.412686 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:56:39.412740 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:56:39.420350 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:56:39.427923 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:56:39.427975 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:56:39.435272 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.442721 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:56:39.442775 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.450389 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:56:39.458043 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:56:39.458098 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
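
The missing files are expected at this point: the `kubeadm reset --force` above removes the /etc/kubernetes kubeconfigs and static-pod manifests, so the "config check failed" path here is the normal post-reset route, not a separate fault. The four grep/rm pairs then amount to one cleanup pass: keep each kubeconfig only if it already points at the expected control-plane endpoint. Equivalent shell logic (a sketch of what the kubeadm.go code is doing, not minikube's actual implementation):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
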
	I1217 00:56:39.465332 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:56:39.508240 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:56:39.508300 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:56:39.586995 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:56:39.587071 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:56:39.587116 1261197 kubeadm.go:319] OS: Linux
	I1217 00:56:39.587161 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:56:39.587217 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:56:39.587273 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:56:39.587330 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:56:39.587376 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:56:39.587433 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:56:39.587488 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:56:39.587544 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:56:39.587589 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:56:39.658303 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:56:39.658422 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:56:39.658518 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:56:39.670076 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:56:39.675448 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 00:56:39.675545 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:56:39.675618 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:56:39.675704 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:56:39.675774 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:56:39.675852 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:56:39.675914 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:56:39.675983 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:56:39.676053 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:56:39.676144 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:56:39.676224 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:56:39.676260 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:56:39.676329 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:56:39.801204 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:56:39.954898 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:56:40.065909 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:56:40.451062 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:56:40.596539 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:56:40.597062 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:56:40.600429 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:56:40.603602 1261197 out.go:252]   - Booting up control plane ...
	I1217 00:56:40.603714 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:56:40.603797 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:56:40.604963 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:56:40.625747 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:56:40.625851 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:56:40.633757 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:56:40.634255 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:56:40.634396 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:56:40.778162 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:56:40.778280 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:00:40.776324 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243331s
	I1217 01:00:40.776348 1261197 kubeadm.go:319] 
	I1217 01:00:40.776405 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:00:40.776437 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:00:40.776540 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:00:40.776544 1261197 kubeadm.go:319] 
	I1217 01:00:40.776648 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:00:40.776679 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:00:40.776709 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:00:40.776712 1261197 kubeadm.go:319] 
	I1217 01:00:40.780629 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:00:40.781051 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:00:40.781158 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:00:40.781394 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:00:40.781398 1261197 kubeadm.go:319] 
	I1217 01:00:40.781466 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
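
The failure here is entirely on the kubelet side: kubeadm wrote the certs, kubeconfigs, and static-pod manifests, started the kubelet, then polled http://127.0.0.1:10248/healthz for the full 4m0s without one healthy answer. The two commands kubeadm suggests are the right starting point; a slightly fuller triage pass from inside the node (a sketch) would be:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 100 --no-pager      # why it is failing to start or crash-looping
    curl -sf http://127.0.0.1:10248/healthz; echo  # the exact probe kubeadm polls
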
	W1217 01:00:40.781578 1261197 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243331s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:00:40.781696 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:00:41.195061 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:00:41.209438 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:00:41.209493 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:00:41.218235 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:00:41.218244 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 01:00:41.218300 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:00:41.226394 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:00:41.226448 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:00:41.234445 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:00:41.242558 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:00:41.242613 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:00:41.250526 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.258573 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:00:41.258634 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.266278 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:00:41.274420 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:00:41.274476 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:00:41.281748 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:00:41.319491 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:00:41.319792 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:00:41.392691 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:00:41.392755 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:00:41.392789 1261197 kubeadm.go:319] OS: Linux
	I1217 01:00:41.392833 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:00:41.392880 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:00:41.392926 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:00:41.392972 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:00:41.393025 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:00:41.393072 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:00:41.393116 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:00:41.393163 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:00:41.393208 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:00:41.471655 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:00:41.471787 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:00:41.471905 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:00:41.482138 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:00:41.485739 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 01:00:41.485837 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:00:41.485905 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:00:41.485986 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:00:41.486050 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:00:41.486123 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:00:41.486180 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:00:41.486253 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:00:41.486318 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:00:41.486396 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:00:41.486478 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:00:41.486522 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:00:41.486584 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:00:41.603323 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:00:41.901106 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:00:42.054265 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:00:42.414109 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:00:42.682518 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:00:42.683180 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:00:42.685848 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:00:42.689217 1261197 out.go:252]   - Booting up control plane ...
	I1217 01:00:42.689317 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:00:42.689401 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:00:42.689468 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:00:42.713083 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:00:42.713185 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:00:42.721813 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:00:42.722110 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:00:42.722158 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:00:42.862014 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:00:42.862133 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:04:42.862018 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284909s
	I1217 01:04:42.862056 1261197 kubeadm.go:319] 
	I1217 01:04:42.862124 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:04:42.862167 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:04:42.862279 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:04:42.862283 1261197 kubeadm.go:319] 
	I1217 01:04:42.862390 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:04:42.862421 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:04:42.862451 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:04:42.862457 1261197 kubeadm.go:319] 
	I1217 01:04:42.866725 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:04:42.867116 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:04:42.867218 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:04:42.867438 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:04:42.867443 1261197 kubeadm.go:319] 
	I1217 01:04:42.867507 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
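
Both init attempts die identically, and the only non-generic lead in this output is the cgroups v1 warning: the 5.15.0-1084-aws host is on cgroup v1, and kubelet v1.35 requires an explicit opt-in to keep running there. Whether that is the actual root cause is not proven by this log, but the warning itself names the knob; the opt-in would be a KubeletConfiguration field, roughly (assumes the v1beta1 spelling failCgroupV1, i.e. the serialized form of the FailCgroupV1 option the warning cites):

    # KubeletConfiguration fragment -- sketch, field exists in kubelet >= v1.31
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

To confirm which cgroup version the host is actually running:

    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" means v2, "tmpfs" means v1
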
	I1217 01:04:42.867593 1261197 kubeadm.go:403] duration metric: took 12m7.31765155s to StartCluster
	I1217 01:04:42.867623 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:04:42.867685 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:04:42.892141 1261197 cri.go:89] found id: ""
	I1217 01:04:42.892155 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.892162 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:04:42.892167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:04:42.892231 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:04:42.916795 1261197 cri.go:89] found id: ""
	I1217 01:04:42.916809 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.916817 1261197 logs.go:284] No container was found matching "etcd"
	I1217 01:04:42.916822 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:04:42.916879 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:04:42.945762 1261197 cri.go:89] found id: ""
	I1217 01:04:42.945776 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.945783 1261197 logs.go:284] No container was found matching "coredns"
	I1217 01:04:42.945794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:04:42.945850 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:04:42.970080 1261197 cri.go:89] found id: ""
	I1217 01:04:42.970094 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.970100 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:04:42.970105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:04:42.970161 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:04:42.994293 1261197 cri.go:89] found id: ""
	I1217 01:04:42.994307 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.994314 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:04:42.994319 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:04:42.994375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:04:43.019856 1261197 cri.go:89] found id: ""
	I1217 01:04:43.019871 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.019879 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:04:43.019884 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:04:43.019980 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:04:43.044643 1261197 cri.go:89] found id: ""
	I1217 01:04:43.044657 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.044664 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 01:04:43.044672 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 01:04:43.044682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:04:43.100644 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 01:04:43.100662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:04:43.115507 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:04:43.115524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:04:43.206420 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:04:43.206430 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 01:04:43.206440 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:04:43.268190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 01:04:43.268210 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 01:04:43.298717 1261197 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:04:43.298758 1261197 out.go:285] * 
	W1217 01:04:43.298817 1261197 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:04:43.298838 1261197 out.go:285] * 
	W1217 01:04:43.301057 1261197 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
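
If this run gets filed, the bundle the box asks for can be captured non-interactively with the command it names, e.g. (the profile placeholder is ours, not from the log):

    minikube logs --file=logs.txt -p <profile>
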
	I1217 01:04:43.305981 1261197 out.go:203] 
	W1217 01:04:43.308777 1261197 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:04:43.308838 1261197 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:04:43.308858 1261197 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:04:43.311954 1261197 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243334749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243430323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243566916Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243654818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243723127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243793503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243862870Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243933976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244147632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244278333Z" level=info msg="Connect containerd service"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244760505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.246010456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.254958867Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255148908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255207460Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255280454Z" level=info msg="Start recovering state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.295825702Z" level=info msg="Start event monitor"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296048071Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296114033Z" level=info msg="Start streaming server"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296179503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296236685Z" level=info msg="runtime interface starting up..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296301301Z" level=info msg="starting plugins..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296367492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296572451Z" level=info msg="containerd successfully booted in 0.086094s"
	Dec 17 00:52:34 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:44.546877   21080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:44.547362   21080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:44.549097   21080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:44.549426   21080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:44.550938   21080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:04:44 up  6:47,  0 user,  load average: 0.02, 0.15, 0.50
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:04:41 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:41 functional-608344 kubelet[20886]: E1217 01:04:41.668692   20886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:41 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:41 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:42 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 01:04:42 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:42 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:42 functional-608344 kubelet[20891]: E1217 01:04:42.419477   20891 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:42 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:42 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 01:04:43 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 kubelet[20959]: E1217 01:04:43.180856   20959 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 01:04:43 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 kubelet[20997]: E1217 01:04:43.935311   20997 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:44 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 01:04:44 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:44 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
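
The kubelet block above shows the mechanism of the failure: systemd restarts kubelet (restart counter 319 through 322) and every attempt exits with "kubelet is configured to not run on a host using cgroup v1", which matches the kubeadm SystemVerification warning earlier in this output. Per that warning, running kubelet v1.35 on a cgroup v1 host requires explicitly setting the KubeletConfiguration field failCgroupV1 to false and skipping the validation. A minimal sketch of that opt-in, assuming kubeadm's --patches mechanism (the same one behind the "[patches] Applied patch" lines above); the patch directory is illustrative, not the one minikube uses:

# Sketch only: opt kubelet back in to deprecated cgroup v1 support (KEP-5573).
# Assumes kubelet >= v1.31, where KubeletConfiguration gained failCgroupV1.
mkdir -p /tmp/kubeadm-patches    # illustrative directory
cat <<'EOF' >/tmp/kubeadm-patches/kubeletconfiguration.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --patches /tmp/kubeadm-patches \
  --ignore-preflight-errors=SystemVerification

Note that the suggestion minikube prints, passing --extra-config=kubelet.cgroup-driver=systemd to minikube start, targets the cgroup driver rather than the cgroup v1 validation, so it may not clear this particular error.
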
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (391.558142ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.02s)
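
The advice block in the kubeadm output points at the kubelet unit itself; with the docker driver those commands run inside the node container, reachable through minikube ssh in the same form as the ssh entries in the Audit table further down. A sketch:

# Inspect the restart loop from the host, via the node container:
out/minikube-linux-arm64 -p functional-608344 ssh sudo systemctl status kubelet
out/minikube-linux-arm64 -p functional-608344 ssh sudo journalctl -xeu kubelet
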

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-608344 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-608344 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (63.643559ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-608344 get po -l tier=control-plane -n kube-system -o=json": exit status 1
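
For reference, the failing query can be replayed by hand against the same kubeconfig context; it fails here only because nothing is listening on 192.168.49.2:8441. A short sketch, with an illustrative jsonpath added for a condensed view once the control plane is up:

# The exact query from functional_test.go:825 (refused while the apiserver is down):
kubectl --context functional-608344 get po -l tier=control-plane -n kube-system -o=json

# Illustrative condensed view for a healthy control plane:
kubectl --context functional-608344 -n kube-system get po -l tier=control-plane \
  -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
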
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
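
The inspect output records how the host reaches this cluster: port 8441/tcp (the apiserver) is published on 127.0.0.1:33946, and the node sits at 192.168.49.2 on the functional-608344 network. Both values can be pulled straight from Docker with the same Go-template pattern minikube applies to 22/tcp later in this log; a sketch:

# Published host port for the apiserver (prints 33946 in the output above):
docker container inspect functional-608344 \
  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

# Node IP on the cluster network (prints 192.168.49.2 in the output above):
docker container inspect functional-608344 \
  -f '{{(index .NetworkSettings.Networks "functional-608344").IPAddress}}'
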
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (294.065121ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-416001 image ls --format json --alsologtostderr                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls --format table --alsologtostderr                                                                                             │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ update-context │ functional-416001 update-context --alsologtostderr -v=2                                                                                                 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ image          │ functional-416001 image ls                                                                                                                              │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ delete         │ -p functional-416001                                                                                                                                    │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start          │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start          │ -p functional-608344 --alsologtostderr -v=8                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:46 UTC │                     │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add registry.k8s.io/pause:latest                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache add minikube-local-cache-test:functional-608344                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ functional-608344 cache delete minikube-local-cache-test:functional-608344                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl images                                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ cache          │ functional-608344 cache reload                                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh            │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ kubectl        │ functional-608344 kubectl -- --context functional-608344 get pods                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ start          │ -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:52:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:52:31.527617 1261197 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:31.527758 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527763 1261197 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:31.527767 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527997 1261197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:52:31.528338 1261197 out.go:368] Setting JSON to false
	I1217 00:52:31.529124 1261197 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23702,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:52:31.529179 1261197 start.go:143] virtualization:  
	I1217 00:52:31.532534 1261197 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:52:31.537145 1261197 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:52:31.537272 1261197 notify.go:221] Checking for updates...
	I1217 00:52:31.542910 1261197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:31.545800 1261197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:52:31.548609 1261197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:52:31.551556 1261197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:52:31.554346 1261197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:31.557970 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:31.558066 1261197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:31.587498 1261197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:52:31.587608 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.650823 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.641966313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.650910 1261197 docker.go:319] overlay module found
	I1217 00:52:31.653844 1261197 out.go:179] * Using the docker driver based on existing profile
	I1217 00:52:31.656662 1261197 start.go:309] selected driver: docker
	I1217 00:52:31.656669 1261197 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.656773 1261197 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:31.656888 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.710052 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.70077893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.710641 1261197 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:52:31.710676 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:31.710788 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:31.710847 1261197 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.713993 1261197 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:52:31.716755 1261197 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:52:31.719575 1261197 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:52:31.722367 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:31.722402 1261197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:52:31.722423 1261197 cache.go:65] Caching tarball of preloaded images
	I1217 00:52:31.722451 1261197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:52:31.722505 1261197 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:52:31.722513 1261197 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:52:31.722616 1261197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:52:31.740561 1261197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:52:31.740571 1261197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:52:31.740584 1261197 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:52:31.740613 1261197 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:52:31.740665 1261197 start.go:364] duration metric: took 37.006µs to acquireMachinesLock for "functional-608344"
	I1217 00:52:31.740682 1261197 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:52:31.740687 1261197 fix.go:54] fixHost starting: 
	I1217 00:52:31.740957 1261197 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:52:31.756910 1261197 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:52:31.756929 1261197 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:52:31.760018 1261197 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:52:31.760042 1261197 machine.go:94] provisionDockerMachine start ...
	I1217 00:52:31.760119 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.776640 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.776960 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.776966 1261197 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:52:31.905356 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:31.905370 1261197 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:52:31.905445 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.925834 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.926164 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.926177 1261197 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:52:32.067014 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:32.067088 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.084172 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:32.084485 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:32.084499 1261197 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:52:32.214216 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:52:32.214232 1261197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:52:32.214253 1261197 ubuntu.go:190] setting up certificates
	I1217 00:52:32.214268 1261197 provision.go:84] configureAuth start
	I1217 00:52:32.214325 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:32.232515 1261197 provision.go:143] copyHostCerts
	I1217 00:52:32.232580 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:52:32.232588 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:52:32.232671 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:52:32.232772 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:52:32.232776 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:52:32.232801 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:52:32.232878 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:52:32.232885 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:52:32.232913 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:52:32.232967 1261197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:52:32.616759 1261197 provision.go:177] copyRemoteCerts
	I1217 00:52:32.616824 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:52:32.616864 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.638193 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.737540 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:52:32.755258 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:52:32.772709 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:52:32.791423 1261197 provision.go:87] duration metric: took 577.141949ms to configureAuth
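configureAuth regenerates the docker-machine style server certificate with this container's SANs (127.0.0.1, 192.168.49.2, the profile name, localhost, minikube) and pushes it to /etc/docker inside the node. If that step is ever suspect, the pushed material can be inspected over the same forwarded SSH port; a hedged sketch, with the port, user, and key path taken from this log (requires OpenSSL 1.1.1+ for -ext):

    # Inspect the server cert minikube pushed to the node (values from this run).
    ssh -p 33943 \
      -i /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa \
      docker@127.0.0.1 \
      'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName'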
	I1217 00:52:32.791441 1261197 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:52:32.791635 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:32.791640 1261197 machine.go:97] duration metric: took 1.031594088s to provisionDockerMachine
	I1217 00:52:32.791646 1261197 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:52:32.791656 1261197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:52:32.791701 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:52:32.791750 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.809559 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.905557 1261197 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:52:32.908787 1261197 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:52:32.908827 1261197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:52:32.908837 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:52:32.908891 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:52:32.908975 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:52:32.909048 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:52:32.909089 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:52:32.916399 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:32.933317 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:52:32.950047 1261197 start.go:296] duration metric: took 158.386583ms for postStartSetup
	I1217 00:52:32.950118 1261197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:52:32.950170 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.968857 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.062653 1261197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:52:33.067278 1261197 fix.go:56] duration metric: took 1.32658398s for fixHost
	I1217 00:52:33.067294 1261197 start.go:83] releasing machines lock for "functional-608344", held for 1.326621929s
	I1217 00:52:33.067361 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:33.084000 1261197 ssh_runner.go:195] Run: cat /version.json
	I1217 00:52:33.084040 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.084288 1261197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:52:33.084348 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.108566 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.111371 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.289488 1261197 ssh_runner.go:195] Run: systemctl --version
	I1217 00:52:33.296034 1261197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:52:33.300233 1261197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:52:33.300292 1261197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:52:33.307943 1261197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:52:33.307957 1261197 start.go:496] detecting cgroup driver to use...
	I1217 00:52:33.307988 1261197 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:52:33.308034 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:52:33.325973 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:52:33.341243 1261197 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:52:33.341313 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:52:33.357700 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:52:33.373469 1261197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:52:33.498827 1261197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:52:33.614529 1261197 docker.go:234] disabling docker service ...
	I1217 00:52:33.614598 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:52:33.629592 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:52:33.642692 1261197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:52:33.771770 1261197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:52:33.894226 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:52:33.907337 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:52:33.922634 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:52:33.932171 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:52:33.941438 1261197 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:52:33.941508 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:52:33.950063 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.958782 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:52:33.967078 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.975466 1261197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:52:33.983339 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:52:33.991895 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:52:34.000351 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:52:34.010891 1261197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:52:34.018879 1261197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:52:34.026594 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.150165 1261197 ssh_runner.go:195] Run: sudo systemctl restart containerd
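The run of sed commands above edits /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces the cgroupfs driver by setting SystemdCgroup = false, migrates legacy runtime names to io.containerd.runc.v2, points the CNI conf_dir at /etc/cni/net.d, and re-enables unprivileged ports; the sysctl/ip_forward writes and the daemon-reload/restart then apply it. A condensed sketch of the key edits, copied from the commands logged above:

    # Condensed sketch of the in-place config.toml edits logged above.
    CFG=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"   # cgroupfs driver
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart containerd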
	I1217 00:52:34.299897 1261197 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:52:34.299958 1261197 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:52:34.303895 1261197 start.go:564] Will wait 60s for crictl version
	I1217 00:52:34.303948 1261197 ssh_runner.go:195] Run: which crictl
	I1217 00:52:34.307381 1261197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:52:34.334814 1261197 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:52:34.334888 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.355644 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.381331 1261197 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:52:34.384165 1261197 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:52:34.399831 1261197 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:52:34.407243 1261197 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:52:34.410160 1261197 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:52:34.410312 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:34.410394 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.434882 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.434894 1261197 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:52:34.434955 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.460154 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.460166 1261197 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:52:34.460173 1261197 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:52:34.460276 1261197 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
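The rendered unit above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). The empty ExecStart= line is deliberate: a systemd drop-in must clear the inherited ExecStart before it can replace it. To confirm the override took effect on the node, something like:

    # Show the kubelet unit with all drop-ins merged, then the effective ExecStart.
    sudo systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager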
	I1217 00:52:34.460340 1261197 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:52:34.485418 1261197 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:52:34.485440 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:34.485447 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:34.485462 1261197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:52:34.485483 1261197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:52:34.485591 1261197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
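Before kubeadm consumes the generated file, it can be sanity-checked offline. Recent kubeadm releases ship a `config validate` subcommand (assumed to be present in the v1.35.0-beta.0 binaries staged under /var/lib/minikube/binaries); a sketch:

    # Validate the generated kubeadm config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml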
	
	I1217 00:52:34.485688 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:52:34.493475 1261197 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:52:34.493536 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:52:34.501738 1261197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:52:34.515117 1261197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:52:34.528350 1261197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1217 00:52:34.541325 1261197 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:52:34.545027 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.663222 1261197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:52:34.871198 1261197 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:52:34.871209 1261197 certs.go:195] generating shared ca certs ...
	I1217 00:52:34.871223 1261197 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:52:34.871350 1261197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:52:34.871405 1261197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:52:34.871411 1261197 certs.go:257] generating profile certs ...
	I1217 00:52:34.871503 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:52:34.871558 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:52:34.871595 1261197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:52:34.871710 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:52:34.871738 1261197 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:52:34.871746 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:52:34.871770 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:52:34.871791 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:52:34.871819 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:52:34.871867 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:34.872533 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:52:34.890674 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:52:34.908252 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:52:34.925752 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:52:34.942982 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:52:34.961072 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:52:34.978793 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:52:34.995794 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:52:35.016106 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:52:35.035474 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:52:35.054248 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:52:35.072025 1261197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:52:35.085836 1261197 ssh_runner.go:195] Run: openssl version
	I1217 00:52:35.092498 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.100138 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:52:35.107992 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111748 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111805 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.153206 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:52:35.161118 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.168560 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:52:35.176276 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180431 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180496 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.224274 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:52:35.231870 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.239209 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:52:35.246988 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250581 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250708 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.291833 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
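The test/ln/openssl triplets above implement the standard OpenSSL CA-directory convention: each trusted PEM in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is how TLS clients locate it. One cycle, as a sketch (the hash b5213941 is the value this run computed for minikubeCA.pem):

    # Install one CA into the hash-addressed trust directory (OpenSSL convention).
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # e.g. b5213941 in this run
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"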
	I1217 00:52:35.299197 1261197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:52:35.302994 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:52:35.343876 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:52:35.384935 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:52:35.425945 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:52:35.468160 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:52:35.509040 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
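`openssl x509 -checkend 86400` exits 0 only if the certificate will still be valid 86400 seconds (24 h) from now, so each of the six checks above is a cheap "does this cert expire within a day" probe; a non-zero exit would force regeneration. For example:

    # Exit status tells you whether the cert survives the next 24 hours.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert valid for at least another day"
    else
      echo "cert expires within 24h; minikube would regenerate it"
    fi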
	I1217 00:52:35.549950 1261197 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:35.550030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:52:35.550101 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.575493 1261197 cri.go:89] found id: ""
	I1217 00:52:35.575551 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:52:35.583488 1261197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:52:35.583498 1261197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:52:35.583562 1261197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:52:35.590939 1261197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.591435 1261197 kubeconfig.go:125] found "functional-608344" server: "https://192.168.49.2:8441"
	I1217 00:52:35.592674 1261197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:52:35.600478 1261197 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:38:00.276726971 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:52:34.535031442 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
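Drift detection is literally a `diff -u` between the kubeadm config used at the last start and the freshly rendered one; the only hunk here is the enable-admission-plugins value, which is exactly the extra-config this test passes, so the reconfigure path is expected. The same check by hand:

    # Reproduce minikube's drift check; exit 0 means no drift, 1 means reconfigure.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new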
	I1217 00:52:35.600490 1261197 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:52:35.600503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 00:52:35.600556 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.635394 1261197 cri.go:89] found id: ""
	I1217 00:52:35.635452 1261197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:52:35.655954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:52:35.664843 1261197 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 00:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 00:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 17 00:42 /etc/kubernetes/scheduler.conf
	
	I1217 00:52:35.664920 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:52:35.673926 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:52:35.681783 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.681837 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:52:35.689482 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.698370 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.698438 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.705988 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:52:35.714414 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.714484 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:52:35.722072 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:52:35.729848 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:35.776855 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.711300 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.926722 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.999232 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
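Rather than a full `kubeadm init`, the restart path replays individual init phases against the staged binaries, in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of the same sequence (paths from this run):

    # Replay the kubeadm phases the restart path runs, in order.
    BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # word splitting of $phase into subcommand arguments is intentional here
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done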
	I1217 00:52:37.047947 1261197 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:52:37.048019 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:37.548207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.048861 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.048206 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.548189 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.049366 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.548557 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.048152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.549106 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.048793 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.549138 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.049014 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.048840 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.048979 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.549120 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.049193 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.548932 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.048207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.548119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.048127 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.548295 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.049080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.548771 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.048210 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.548773 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.048258 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.549096 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.048188 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.049033 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.549038 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.048512 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.548619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.048253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.549044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.048294 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.548919 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.048218 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.548855 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.048880 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.548221 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.048194 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.548710 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.048613 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.548834 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.049119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.548167 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.048599 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.549080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.048587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.548846 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.048217 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.049020 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.548398 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.049097 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.548960 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.049065 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.548376 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.048388 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.548808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.048244 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.548239 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.049099 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.549083 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.549030 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.048350 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.548287 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.048923 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.048292 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.549092 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.048874 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.549144 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.048777 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.548153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.048868 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.548124 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.048936 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.048238 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.048954 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.548662 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.049044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.548942 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.048968 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.548787 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.048489 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.548243 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.549178 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.048993 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.548676 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.049104 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.048853 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.549118 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.048215 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.549153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.048154 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.549126 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.048949 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.048782 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.548760 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.048205 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.049183 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.548231 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.549031 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.048208 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.548852 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
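The ~120 pgrep lines above are a single poll loop: every 500 ms minikube asks whether a kube-apiserver process matching the profile exists. In this run roughly a minute elapses with no match, after which it falls through to the diagnostics gathering below. An equivalent shell sketch, with an explicit timeout standing in for the observed budget:

    # Poll for the apiserver process like the loop above; 60s budget is assumed.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 0.5
    done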
	I1217 00:53:37.048332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:37.048420 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:37.076924 1261197 cri.go:89] found id: ""
	I1217 00:53:37.076939 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.076947 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:37.076953 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:37.077010 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:37.103936 1261197 cri.go:89] found id: ""
	I1217 00:53:37.103950 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.103957 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:37.103962 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:37.104019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:37.134578 1261197 cri.go:89] found id: ""
	I1217 00:53:37.134592 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.134599 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:37.134605 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:37.134667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:37.162973 1261197 cri.go:89] found id: ""
	I1217 00:53:37.162986 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.162994 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:37.162999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:37.163063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:37.193768 1261197 cri.go:89] found id: ""
	I1217 00:53:37.193782 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.193789 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:37.193794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:37.193864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:37.217378 1261197 cri.go:89] found id: ""
	I1217 00:53:37.217391 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.217398 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:37.217403 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:37.217464 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:37.245938 1261197 cri.go:89] found id: ""
	I1217 00:53:37.245952 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.245959 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:37.245967 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:37.245977 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:37.303279 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:37.303297 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:37.317809 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:37.317826 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:37.378847 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:37.378858 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:37.378870 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:37.440776 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:37.440795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
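When the wait fails, the same five diagnostics are gathered on every retry: the kubelet journal, kernel warnings, `kubectl describe nodes` (failing here because nothing is listening on 8441), the containerd journal, and a container listing. Collected by hand, per the commands logged above:

    # Gather the same diagnostics minikube collects on each failed wait.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a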
	I1217 00:53:39.970536 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:39.980652 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:39.980714 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:40.014928 1261197 cri.go:89] found id: ""
	I1217 00:53:40.014943 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.014950 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:40.014956 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:40.015027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:40.044249 1261197 cri.go:89] found id: ""
	I1217 00:53:40.044284 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.044292 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:40.044299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:40.044375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:40.071071 1261197 cri.go:89] found id: ""
	I1217 00:53:40.071086 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.071094 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:40.071100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:40.071166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:40.096922 1261197 cri.go:89] found id: ""
	I1217 00:53:40.096936 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.096944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:40.096950 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:40.097019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:40.126209 1261197 cri.go:89] found id: ""
	I1217 00:53:40.126223 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.126231 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:40.126237 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:40.126302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:40.166443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.166457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.166465 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:40.166470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:40.166532 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:40.194443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.194457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.194465 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:40.194472 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:40.194483 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:40.249960 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:40.249980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:40.264714 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:40.264730 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:40.334158 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:40.334168 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:40.334179 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:40.396176 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:40.396196 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:42.927525 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:42.939255 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:42.939317 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:42.967766 1261197 cri.go:89] found id: ""
	I1217 00:53:42.967780 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.967788 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:42.967793 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:42.967852 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:42.992216 1261197 cri.go:89] found id: ""
	I1217 00:53:42.992230 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.992238 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:42.992244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:42.992301 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:43.018174 1261197 cri.go:89] found id: ""
	I1217 00:53:43.018188 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.018196 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:43.018201 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:43.018260 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:43.043673 1261197 cri.go:89] found id: ""
	I1217 00:53:43.043687 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.043695 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:43.043701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:43.043763 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:43.067990 1261197 cri.go:89] found id: ""
	I1217 00:53:43.068005 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.068012 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:43.068017 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:43.068079 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:43.093908 1261197 cri.go:89] found id: ""
	I1217 00:53:43.093923 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.093930 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:43.093936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:43.093995 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:43.120199 1261197 cri.go:89] found id: ""
	I1217 00:53:43.120213 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.120220 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:43.120228 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:43.120238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:43.181971 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:43.181989 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:43.197524 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:43.197541 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:43.261336 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:43.261356 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:43.261366 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:43.322519 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:43.322538 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
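
Each `found id: ""` / `0 containers` pair comes from splitting the empty output of `crictl ps -a --quiet --name=<component>`: with `--quiet`, crictl prints one container ID per line, so empty output must be parsed as zero containers rather than as a single empty ID. A small self-contained sketch of that parsing (the helper `parseContainerIDs` is hypothetical, not minikube's cri.go):

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs turns crictl's --quiet output into a slice of IDs,
// treating blank lines (and fully empty output) as no containers.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(len(parseContainerIDs("")))           // 0: the `found id: ""` case in the log
	fmt.Println(len(parseContainerIDs("abc123\n\n"))) // 1
}
```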
	I1217 00:53:45.852691 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:45.863769 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:45.863831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:45.888335 1261197 cri.go:89] found id: ""
	I1217 00:53:45.888350 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.888357 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:45.888363 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:45.888422 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:45.918194 1261197 cri.go:89] found id: ""
	I1217 00:53:45.918209 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.918216 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:45.918222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:45.918285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:45.943809 1261197 cri.go:89] found id: ""
	I1217 00:53:45.943824 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.943831 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:45.943836 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:45.943893 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:45.969167 1261197 cri.go:89] found id: ""
	I1217 00:53:45.969182 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.969189 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:45.969195 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:45.969261 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:45.995411 1261197 cri.go:89] found id: ""
	I1217 00:53:45.995425 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.995432 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:45.995437 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:45.995495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:46.025138 1261197 cri.go:89] found id: ""
	I1217 00:53:46.025153 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.025161 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:46.025167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:46.025230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:46.052563 1261197 cri.go:89] found id: ""
	I1217 00:53:46.052578 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.052585 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:46.052594 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:46.052604 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:46.110268 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:46.110286 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:46.128213 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:46.128230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:46.211985 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:46.212008 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:46.212018 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:46.274022 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:46.274041 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:48.809808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:48.820115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:48.820172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:48.846046 1261197 cri.go:89] found id: ""
	I1217 00:53:48.846062 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.846069 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:48.846075 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:48.846145 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:48.871706 1261197 cri.go:89] found id: ""
	I1217 00:53:48.871721 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.871728 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:48.871734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:48.871794 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:48.896325 1261197 cri.go:89] found id: ""
	I1217 00:53:48.896341 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.896348 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:48.896353 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:48.896413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:48.922321 1261197 cri.go:89] found id: ""
	I1217 00:53:48.922335 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.922342 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:48.922348 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:48.922406 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:48.951311 1261197 cri.go:89] found id: ""
	I1217 00:53:48.951325 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.951332 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:48.951337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:48.951395 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:48.976196 1261197 cri.go:89] found id: ""
	I1217 00:53:48.976211 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.976218 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:48.976224 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:48.976285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:49.005156 1261197 cri.go:89] found id: ""
	I1217 00:53:49.005173 1261197 logs.go:282] 0 containers: []
	W1217 00:53:49.005181 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:49.005190 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:49.005202 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:49.067318 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:49.067385 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:49.083407 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:49.083424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:49.159947 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:49.159958 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:49.159970 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:49.230934 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:49.230956 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
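
The container-status gather is a shell fallback chain: `which crictl || echo crictl` resolves the binary (echoing the bare name if `which` finds nothing, so the command line still parses), and if the whole crictl invocation fails, `|| sudo docker ps -a` tries Docker instead. The same fallback, sketched in Go with `exec.LookPath` (the helper name `containerStatus` is hypothetical, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback chain in the log's container-status
// gather: prefer crictl when it is on PATH, otherwise try docker.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// same spirit as the "|| sudo docker ps -a" tail of the shell pipeline
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	fmt.Printf("err=%v\n%s", err, out)
}
```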
	I1217 00:53:51.761379 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:51.771759 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:51.771821 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:51.796369 1261197 cri.go:89] found id: ""
	I1217 00:53:51.796384 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.796391 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:51.796396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:51.796454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:51.822318 1261197 cri.go:89] found id: ""
	I1217 00:53:51.822333 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.822340 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:51.822345 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:51.822409 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:51.847395 1261197 cri.go:89] found id: ""
	I1217 00:53:51.847409 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.847416 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:51.847421 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:51.847479 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:51.872529 1261197 cri.go:89] found id: ""
	I1217 00:53:51.872544 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.872552 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:51.872557 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:51.872619 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:51.900871 1261197 cri.go:89] found id: ""
	I1217 00:53:51.900885 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.900893 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:51.900898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:51.900967 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:51.928534 1261197 cri.go:89] found id: ""
	I1217 00:53:51.928548 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.928555 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:51.928560 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:51.928621 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:51.957597 1261197 cri.go:89] found id: ""
	I1217 00:53:51.957611 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.957619 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:51.957627 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:51.957636 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:52.016924 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:52.016945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:52.033440 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:52.033458 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:52.106352 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:52.106373 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:52.106384 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:52.173915 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:52.173934 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:54.703159 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:54.713797 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:54.713862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:54.739273 1261197 cri.go:89] found id: ""
	I1217 00:53:54.739287 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.739294 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:54.739299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:54.739355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:54.770340 1261197 cri.go:89] found id: ""
	I1217 00:53:54.770355 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.770362 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:54.770367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:54.770430 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:54.795583 1261197 cri.go:89] found id: ""
	I1217 00:53:54.795597 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.795604 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:54.795611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:54.795670 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:54.823673 1261197 cri.go:89] found id: ""
	I1217 00:53:54.823688 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.823696 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:54.823701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:54.823760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:54.849899 1261197 cri.go:89] found id: ""
	I1217 00:53:54.849913 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.849921 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:54.849927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:54.849986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:54.874746 1261197 cri.go:89] found id: ""
	I1217 00:53:54.874761 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.874767 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:54.874773 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:54.874831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:54.898944 1261197 cri.go:89] found id: ""
	I1217 00:53:54.898961 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.898968 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:54.898975 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:54.898986 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:54.913535 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:54.913552 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:54.975130 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:54.975140 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:54.975150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:55.037117 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:55.037139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:55.067838 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:55.067855 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
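
Note that the gather order shifts between cycles: at 00:53:40 the kubelet journal is collected first, in the cycle just above it comes last, and the 00:54:00 cycle below opens with describe-nodes. That is consistent with ranging over an unordered Go map of log producers, though that is an inference from the log, not confirmed against minikube's source. A sketch of the idea, reusing the exact commands from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Diagnostic commands copied verbatim from the log. The dmesg flags mean:
// -P no pager, -H human-readable timestamps, -L=never no color, and
// --level restricts output to warn-and-worse records.
var gathers = map[string]string{
	"kubelet":    `sudo journalctl -u kubelet -n 400`,
	"containerd": `sudo journalctl -u containerd -n 400`,
	"dmesg":      `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
}

func main() {
	// Ranging over a map is deliberately unordered in Go, which would
	// explain the shifting gather order seen in the log (an inference).
	for name, cmd := range gathers {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v, %d bytes)\n", name, err, len(out))
	}
}
```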
	I1217 00:53:57.627174 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:57.637082 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:57.637153 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:57.661527 1261197 cri.go:89] found id: ""
	I1217 00:53:57.661541 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.661548 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:57.661553 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:57.661611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:57.685175 1261197 cri.go:89] found id: ""
	I1217 00:53:57.685189 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.685200 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:57.685205 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:57.685263 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:57.711702 1261197 cri.go:89] found id: ""
	I1217 00:53:57.711717 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.711724 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:57.711729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:57.711868 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:57.740036 1261197 cri.go:89] found id: ""
	I1217 00:53:57.740050 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.740058 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:57.740063 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:57.740122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:57.768675 1261197 cri.go:89] found id: ""
	I1217 00:53:57.768697 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.768704 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:57.768710 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:57.768775 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:57.792870 1261197 cri.go:89] found id: ""
	I1217 00:53:57.792883 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.792890 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:57.792895 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:57.792965 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:57.817001 1261197 cri.go:89] found id: ""
	I1217 00:53:57.817015 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.817022 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:57.817031 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:57.817053 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.871861 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:57.871881 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:57.886738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:57.886755 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:57.949301 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:57.949319 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:57.949329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:58.010230 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:58.010249 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.540430 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:00.550751 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:00.550814 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:00.576488 1261197 cri.go:89] found id: ""
	I1217 00:54:00.576501 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.576510 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:00.576515 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:00.576573 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:00.601369 1261197 cri.go:89] found id: ""
	I1217 00:54:00.601383 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.601396 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:00.601401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:00.601459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:00.625632 1261197 cri.go:89] found id: ""
	I1217 00:54:00.625667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.625675 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:00.625680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:00.625738 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:00.651689 1261197 cri.go:89] found id: ""
	I1217 00:54:00.651703 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.651710 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:00.651715 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:00.651777 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:00.679744 1261197 cri.go:89] found id: ""
	I1217 00:54:00.679757 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.679765 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:00.679770 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:00.679828 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:00.709559 1261197 cri.go:89] found id: ""
	I1217 00:54:00.709573 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.709580 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:00.709585 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:00.709662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:00.734417 1261197 cri.go:89] found id: ""
	I1217 00:54:00.734432 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.734439 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:00.734447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:00.734457 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.797638 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.797675 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:00.797685 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:00.859579 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.859598 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.885766 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:00.885783 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:00.946324 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:00.946344 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
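
Every describe-nodes attempt fails identically because the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and with no kube-apiserver container running (see the empty crictl listings throughout) nothing is bound to that port, so the TCP connect fails fast with `connection refused` instead of hanging. A minimal probe reproducing the failing dial, assuming it is run on the node itself where `localhost:8441` is the apiserver address:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same TCP dial kubectl performs before any TLS or HTTP: with no
	// listener on 8441, connect() returns ECONNREFUSED immediately.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // mirrors the log's "connection refused"
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```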
	I1217 00:54:03.461934 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:03.472673 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:03.472733 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:03.496966 1261197 cri.go:89] found id: ""
	I1217 00:54:03.496980 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.496987 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:03.496992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:03.497048 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:03.522192 1261197 cri.go:89] found id: ""
	I1217 00:54:03.522207 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.522214 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:03.522219 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:03.522280 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:03.547069 1261197 cri.go:89] found id: ""
	I1217 00:54:03.547083 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.547090 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:03.547095 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:03.547175 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:03.572136 1261197 cri.go:89] found id: ""
	I1217 00:54:03.572149 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.572156 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:03.572162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:03.572234 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:03.600755 1261197 cri.go:89] found id: ""
	I1217 00:54:03.600770 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.600782 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:03.600788 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:03.600859 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:03.629818 1261197 cri.go:89] found id: ""
	I1217 00:54:03.629836 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.629843 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:03.629849 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:03.629905 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:03.656769 1261197 cri.go:89] found id: ""
	I1217 00:54:03.656783 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.656790 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:03.656797 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:03.656807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:03.712292 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:03.712313 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.727502 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:03.727518 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:03.791668 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:03.782970   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.783616   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785323   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785958   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.787552   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:03.791678 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:03.791688 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:03.854180 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:03.854200 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
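
Each cycle above issues the same crictl query once per control-plane component. A compact equivalent of that probe sequence (a sketch using the exact flags from the log, not the harness's actual code):

    # List all containers (any state) whose name matches each component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done
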
	I1217 00:54:06.381966 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:06.393097 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:06.393156 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:06.429087 1261197 cri.go:89] found id: ""
	I1217 00:54:06.429101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.429108 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:06.429113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:06.429189 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:06.454075 1261197 cri.go:89] found id: ""
	I1217 00:54:06.454091 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.454101 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:06.454106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:06.454179 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:06.478067 1261197 cri.go:89] found id: ""
	I1217 00:54:06.478081 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.478088 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:06.478093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:06.478149 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:06.503508 1261197 cri.go:89] found id: ""
	I1217 00:54:06.503522 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.503529 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:06.503534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:06.503592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:06.532125 1261197 cri.go:89] found id: ""
	I1217 00:54:06.532139 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.532146 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:06.532151 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:06.532218 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:06.557383 1261197 cri.go:89] found id: ""
	I1217 00:54:06.557397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.557404 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:06.557409 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:06.557482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:06.583086 1261197 cri.go:89] found id: ""
	I1217 00:54:06.583101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.583109 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:06.583117 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:06.583128 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:06.638133 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:06.638153 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:06.652420 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:06.652439 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:06.715679 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:06.706907   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.707622   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709271   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709877   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.711565   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:06.715692 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:06.715703 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:06.783529 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:06.783557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
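
The "container status" command above chains two fallbacks: if `which crictl` finds no binary, the `|| echo crictl` substitutes the bare name so the command can still be attempted, and if that whole invocation fails, the outer `||` falls through to the Docker CLI. The same shell logic, spelled out with $() instead of backticks:

    # Prefer crictl (resolved path or bare name), fall back to Docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
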
	I1217 00:54:09.314587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:09.324947 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:09.325009 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:09.349922 1261197 cri.go:89] found id: ""
	I1217 00:54:09.349945 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.349952 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:09.349957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:09.350025 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:09.381538 1261197 cri.go:89] found id: ""
	I1217 00:54:09.381552 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.381560 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:09.381565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:09.381627 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:09.412584 1261197 cri.go:89] found id: ""
	I1217 00:54:09.412606 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.412613 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:09.412621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:09.412696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:09.446518 1261197 cri.go:89] found id: ""
	I1217 00:54:09.446533 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.446541 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:09.446547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:09.446620 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:09.477943 1261197 cri.go:89] found id: ""
	I1217 00:54:09.477956 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.477963 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:09.477968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:09.478027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:09.503386 1261197 cri.go:89] found id: ""
	I1217 00:54:09.503400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.503407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:09.503413 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:09.503476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:09.528266 1261197 cri.go:89] found id: ""
	I1217 00:54:09.528292 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.528300 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:09.528308 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:09.528318 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:09.590766 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:09.590786 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.618540 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:09.618556 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:09.675017 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:09.675037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:09.689541 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:09.689557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:09.753013 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:09.744768   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.745442   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747017   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747521   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.749196   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
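
Every kubectl probe in these cycles dies with "connection refused" on [::1]:8441, meaning nothing is listening on the apiserver port at all. Two quick checks that would confirm this from inside the node (a sketch; `ss` and `curl` being present in the minikube image is an assumption):

    # Is anything bound to the test's apiserver port (8441)?
    sudo ss -ltn 'sport = :8441'
    # Does the endpoint answer at the TCP/TLS level?
    curl -ksS https://localhost:8441/livez || true
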
	I1217 00:54:12.253253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:12.263867 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:12.263926 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:12.289871 1261197 cri.go:89] found id: ""
	I1217 00:54:12.289888 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.289904 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:12.289910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:12.289975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:12.316441 1261197 cri.go:89] found id: ""
	I1217 00:54:12.316455 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.316462 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:12.316467 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:12.316527 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:12.340348 1261197 cri.go:89] found id: ""
	I1217 00:54:12.340362 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.340370 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:12.340375 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:12.340432 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:12.364082 1261197 cri.go:89] found id: ""
	I1217 00:54:12.364097 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.364104 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:12.364109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:12.364167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:12.390849 1261197 cri.go:89] found id: ""
	I1217 00:54:12.390863 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.390870 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:12.390875 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:12.390933 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:12.420430 1261197 cri.go:89] found id: ""
	I1217 00:54:12.420444 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.420451 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:12.420456 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:12.420518 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:12.448205 1261197 cri.go:89] found id: ""
	I1217 00:54:12.448221 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.448228 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:12.448236 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:12.448247 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:12.504931 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:12.504952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:12.519968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:12.519985 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:12.584010 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:12.584021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:12.584032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:12.647102 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:12.647123 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
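
The log gathering above pulls the newest 400 journal lines per unit. The standalone equivalents, runnable by hand on the node (--no-pager is an addition assumed for non-interactive use; the log's own invocations run under `/bin/bash -c` where no pager attaches):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
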
	I1217 00:54:15.176013 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:15.186921 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:15.186985 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:15.215197 1261197 cri.go:89] found id: ""
	I1217 00:54:15.215211 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.215218 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:15.215226 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:15.215284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:15.240116 1261197 cri.go:89] found id: ""
	I1217 00:54:15.240130 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.240137 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:15.240142 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:15.240201 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:15.267788 1261197 cri.go:89] found id: ""
	I1217 00:54:15.267802 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.267809 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:15.267814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:15.267871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:15.291699 1261197 cri.go:89] found id: ""
	I1217 00:54:15.291713 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.291720 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:15.291725 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:15.291782 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:15.315522 1261197 cri.go:89] found id: ""
	I1217 00:54:15.315536 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.315542 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:15.315548 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:15.315609 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:15.340325 1261197 cri.go:89] found id: ""
	I1217 00:54:15.340339 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.340346 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:15.340361 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:15.340423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:15.369889 1261197 cri.go:89] found id: ""
	I1217 00:54:15.369917 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.369924 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:15.369932 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:15.369942 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:15.428658 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:15.428679 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:15.444080 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:15.444099 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:15.512831 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:15.504258   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.504866   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506417   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506903   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.508413   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:15.512843 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:15.512861 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:15.578043 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:15.578063 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
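
The dmesg invocation repeated above filters the kernel ring buffer down to actionable records; annotated with what each flag does (same flags as the log):

    # -P: no pager, -H: human-readable timestamps, -L=never: no color,
    # --level: warning-and-worse records only; keep the newest 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
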
	I1217 00:54:18.110567 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:18.120744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:18.120802 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:18.145094 1261197 cri.go:89] found id: ""
	I1217 00:54:18.145108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.145116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:18.145122 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:18.145185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:18.169518 1261197 cri.go:89] found id: ""
	I1217 00:54:18.169532 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.169542 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:18.169547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:18.169607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:18.194342 1261197 cri.go:89] found id: ""
	I1217 00:54:18.194356 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.194363 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:18.194369 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:18.194427 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:18.222931 1261197 cri.go:89] found id: ""
	I1217 00:54:18.222944 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.222952 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:18.222957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:18.223015 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:18.246707 1261197 cri.go:89] found id: ""
	I1217 00:54:18.246721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.246728 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:18.246734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:18.246792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:18.276152 1261197 cri.go:89] found id: ""
	I1217 00:54:18.276172 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.276180 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:18.276185 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:18.276250 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:18.300697 1261197 cri.go:89] found id: ""
	I1217 00:54:18.300711 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.300718 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:18.300725 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:18.300735 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:18.365628 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:18.357129   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.357756   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.359407   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.360050   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.361606   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:18.365661 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:18.365671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:18.437541 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:18.437560 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:18.465122 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:18.465138 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:18.522977 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:18.522997 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
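
Once the apiserver does come up, the lightest-weight readiness check with the same binary and kubeconfig the collector uses would be the one below (a sketch; /readyz is the standard apiserver health endpoint, not something this log exercises):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
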
	I1217 00:54:21.040317 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:21.050538 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:21.050601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:21.074720 1261197 cri.go:89] found id: ""
	I1217 00:54:21.074734 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.074741 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:21.074746 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:21.074808 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:21.099388 1261197 cri.go:89] found id: ""
	I1217 00:54:21.099402 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.099409 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:21.099414 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:21.099471 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:21.123589 1261197 cri.go:89] found id: ""
	I1217 00:54:21.123603 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.123616 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:21.123621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:21.123680 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:21.149246 1261197 cri.go:89] found id: ""
	I1217 00:54:21.149260 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.149267 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:21.149272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:21.149330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:21.175795 1261197 cri.go:89] found id: ""
	I1217 00:54:21.175809 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.175815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:21.175821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:21.175878 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:21.200104 1261197 cri.go:89] found id: ""
	I1217 00:54:21.200118 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.200125 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:21.200131 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:21.200191 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:21.228601 1261197 cri.go:89] found id: ""
	I1217 00:54:21.228615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.228622 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:21.228630 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:21.228642 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:21.285141 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:21.285160 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:21.300538 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:21.300554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:21.368570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:21.359690   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.360441   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362133   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362671   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.364235   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:21.368590 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:21.368601 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:21.438594 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:21.438613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:23.967152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:23.977246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:23.977330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:24.002158 1261197 cri.go:89] found id: ""
	I1217 00:54:24.002175 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.002183 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:24.002189 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:24.002297 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:24.034702 1261197 cri.go:89] found id: ""
	I1217 00:54:24.034716 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.034723 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:24.034728 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:24.034788 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:24.059383 1261197 cri.go:89] found id: ""
	I1217 00:54:24.059397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.059404 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:24.059410 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:24.059466 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:24.088018 1261197 cri.go:89] found id: ""
	I1217 00:54:24.088032 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.088039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:24.088044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:24.088101 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:24.112493 1261197 cri.go:89] found id: ""
	I1217 00:54:24.112507 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.112514 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:24.112519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:24.112575 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:24.139798 1261197 cri.go:89] found id: ""
	I1217 00:54:24.139813 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.139819 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:24.139825 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:24.139886 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:24.164994 1261197 cri.go:89] found id: ""
	I1217 00:54:24.165008 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.165015 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:24.165022 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:24.165032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:24.224418 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:24.224438 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:24.239090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:24.239107 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:24.307181 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:24.298410   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.299241   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.300991   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.301309   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.302897   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:24.307192 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:24.307203 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:24.369600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:24.369620 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:26.910110 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:26.920271 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:26.920343 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:26.947883 1261197 cri.go:89] found id: ""
	I1217 00:54:26.947897 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.947908 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:26.947913 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:26.947987 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:26.973290 1261197 cri.go:89] found id: ""
	I1217 00:54:26.973304 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.973312 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:26.973318 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:26.973377 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:26.997246 1261197 cri.go:89] found id: ""
	I1217 00:54:26.997261 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.997268 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:26.997272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:26.997328 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:27.023408 1261197 cri.go:89] found id: ""
	I1217 00:54:27.023422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.023429 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:27.023434 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:27.023494 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:27.051626 1261197 cri.go:89] found id: ""
	I1217 00:54:27.051640 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.051648 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:27.051653 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:27.051713 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:27.076431 1261197 cri.go:89] found id: ""
	I1217 00:54:27.076445 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.076452 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:27.076458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:27.076522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:27.101707 1261197 cri.go:89] found id: ""
	I1217 00:54:27.101721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.101728 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:27.101738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:27.101748 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:27.168764 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:27.159424   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.160157   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162060   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162697   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.164430   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:27.168785 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:27.168797 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:27.233485 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:27.233505 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:27.269682 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:27.269699 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:27.328866 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:27.328887 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:29.845088 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:29.855320 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:29.855384 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:29.880133 1261197 cri.go:89] found id: ""
	I1217 00:54:29.880147 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.880156 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:29.880162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:29.880233 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:29.905055 1261197 cri.go:89] found id: ""
	I1217 00:54:29.905070 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.905078 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:29.905083 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:29.905141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:29.931379 1261197 cri.go:89] found id: ""
	I1217 00:54:29.931393 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.931400 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:29.931404 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:29.931465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:29.957268 1261197 cri.go:89] found id: ""
	I1217 00:54:29.957283 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.957290 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:29.957296 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:29.957360 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:29.982289 1261197 cri.go:89] found id: ""
	I1217 00:54:29.982303 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.982311 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:29.982316 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:29.982375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:30.024866 1261197 cri.go:89] found id: ""
	I1217 00:54:30.024883 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.024891 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:30.024898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:30.024973 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:30.071833 1261197 cri.go:89] found id: ""
	I1217 00:54:30.071852 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.071861 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:30.071877 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:30.071891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:30.147472 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:30.138339   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.139058   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.140827   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.141510   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.143194   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:30.147484 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:30.147497 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:30.211213 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:30.211235 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:30.240355 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:30.240371 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:30.299743 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:30.299761 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
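Each iteration above has the same shape: wait a couple of seconds, probe for a running kube-apiserver process, then query containerd by name for every control-plane component before re-gathering logs. A rough shell equivalent of that wait loop, built only from the commands the log itself shows (a sketch, not minikube's actual Go implementation):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                 kube-controller-manager kindnet; do
            sudo crictl ps -a --quiet --name="$c"   # empty on every pass here
        done
    done

The loop keeps repeating in this log because no apiserver container is ever found.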
	I1217 00:54:32.815023 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:32.824966 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:32.825040 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:32.849786 1261197 cri.go:89] found id: ""
	I1217 00:54:32.849799 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.849806 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:32.849812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:32.849875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:32.875478 1261197 cri.go:89] found id: ""
	I1217 00:54:32.875491 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.875498 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:32.875503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:32.875563 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:32.899514 1261197 cri.go:89] found id: ""
	I1217 00:54:32.899528 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.899534 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:32.899539 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:32.899601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:32.923962 1261197 cri.go:89] found id: ""
	I1217 00:54:32.923977 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.923984 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:32.923990 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:32.924067 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:32.948671 1261197 cri.go:89] found id: ""
	I1217 00:54:32.948685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.948692 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:32.948697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:32.948753 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:32.973420 1261197 cri.go:89] found id: ""
	I1217 00:54:32.973434 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.973440 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:32.973446 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:32.973505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:32.997981 1261197 cri.go:89] found id: ""
	I1217 00:54:32.997996 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.998003 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:32.998010 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:32.998020 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:33.055157 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:33.055177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
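Log gathering shells out to journalctl with a fixed 400-line tail per unit. When reproducing interactively, following the units live is often more useful than a fixed tail; these are standard systemd options, not minikube-specific:

    sudo journalctl -u kubelet -f --no-pager
    sudo journalctl -u containerd -n 400 --no-pager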
	I1217 00:54:33.070286 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:33.070306 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:33.136931 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:33.127195   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.128490   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.129422   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.130970   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.131425   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:33.136941 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:33.136952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:33.199432 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:33.199453 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:35.728077 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:35.738194 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:35.738256 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:35.763154 1261197 cri.go:89] found id: ""
	I1217 00:54:35.763169 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.763176 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:35.763182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:35.763238 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:35.787668 1261197 cri.go:89] found id: ""
	I1217 00:54:35.787682 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.787689 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:35.787695 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:35.787751 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:35.811854 1261197 cri.go:89] found id: ""
	I1217 00:54:35.811868 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.811884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:35.811890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:35.811961 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:35.836579 1261197 cri.go:89] found id: ""
	I1217 00:54:35.836594 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.836601 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:35.836607 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:35.836684 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:35.861837 1261197 cri.go:89] found id: ""
	I1217 00:54:35.861851 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.861858 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:35.861863 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:35.861921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:35.886709 1261197 cri.go:89] found id: ""
	I1217 00:54:35.886723 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.886730 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:35.886736 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:35.886792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:35.910235 1261197 cri.go:89] found id: ""
	I1217 00:54:35.910248 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.910255 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:35.910275 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:35.910285 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:35.966535 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:35.966553 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:35.981143 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:35.981169 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:36.045220 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:36.037007   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.037415   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039070   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039887   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:36.045231 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:36.045241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:36.106277 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:36.106296 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:38.637781 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:38.649664 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:38.649725 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:38.691238 1261197 cri.go:89] found id: ""
	I1217 00:54:38.691252 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.691259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:38.691264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:38.691322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:38.716035 1261197 cri.go:89] found id: ""
	I1217 00:54:38.716049 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.716055 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:38.716066 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:38.716125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:38.740603 1261197 cri.go:89] found id: ""
	I1217 00:54:38.740616 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.740624 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:38.740629 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:38.740687 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:38.766239 1261197 cri.go:89] found id: ""
	I1217 00:54:38.766253 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.766260 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:38.766266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:38.766324 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:38.791492 1261197 cri.go:89] found id: ""
	I1217 00:54:38.791506 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.791513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:38.791519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:38.791579 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:38.816435 1261197 cri.go:89] found id: ""
	I1217 00:54:38.816449 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.816456 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:38.816461 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:38.816520 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:38.841085 1261197 cri.go:89] found id: ""
	I1217 00:54:38.841099 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.841107 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:38.841114 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:38.841124 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:38.896837 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:38.896856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:38.911640 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:38.911658 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:38.976373 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:38.976383 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:38.976393 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:39.037751 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:39.037771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.567032 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:41.578116 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:41.578182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:41.603748 1261197 cri.go:89] found id: ""
	I1217 00:54:41.603762 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.603770 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:41.603775 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:41.603833 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:41.634998 1261197 cri.go:89] found id: ""
	I1217 00:54:41.635012 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.635019 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:41.635024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:41.635080 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:41.678283 1261197 cri.go:89] found id: ""
	I1217 00:54:41.678297 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.678307 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:41.678312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:41.678375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:41.704945 1261197 cri.go:89] found id: ""
	I1217 00:54:41.704960 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.704967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:41.704977 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:41.705035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:41.729909 1261197 cri.go:89] found id: ""
	I1217 00:54:41.729923 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.729930 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:41.729936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:41.730019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:41.754648 1261197 cri.go:89] found id: ""
	I1217 00:54:41.754662 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.754669 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:41.754675 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:41.754734 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:41.779433 1261197 cri.go:89] found id: ""
	I1217 00:54:41.779448 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.779455 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:41.779463 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:41.779474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:41.793989 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:41.794006 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:41.858584 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:41.858594 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:41.858605 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:41.923655 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:41.923682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.950619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:41.950638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:44.507762 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:44.517733 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:44.517793 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:44.541892 1261197 cri.go:89] found id: ""
	I1217 00:54:44.541905 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.541924 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:44.541929 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:44.541986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:44.570803 1261197 cri.go:89] found id: ""
	I1217 00:54:44.570818 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.570824 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:44.570830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:44.570889 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:44.599324 1261197 cri.go:89] found id: ""
	I1217 00:54:44.599338 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.599345 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:44.599351 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:44.599412 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:44.632615 1261197 cri.go:89] found id: ""
	I1217 00:54:44.632629 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.632637 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:44.632643 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:44.632705 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:44.659976 1261197 cri.go:89] found id: ""
	I1217 00:54:44.659989 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.660009 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:44.660015 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:44.660085 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:44.688987 1261197 cri.go:89] found id: ""
	I1217 00:54:44.689000 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.689007 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:44.689013 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:44.689069 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:44.712988 1261197 cri.go:89] found id: ""
	I1217 00:54:44.713002 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.713010 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:44.713018 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:44.713030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:44.727473 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:44.727489 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:44.794008 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
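The describe-nodes probe invokes the versioned kubectl binary directly against the node's kubeconfig. Re-running it with client-side verbosity makes the refused request explicit; -v is a standard kubectl flag, so this is only a small variation on the command already in the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig -v=7

At -v=7 kubectl prints each attempted request (GET https://localhost:8441/api?timeout=32s) right before the connection-refused error.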
	I1217 00:54:44.794021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:44.794031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:44.855600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:44.855621 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:44.883007 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:44.883023 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.442293 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:47.452401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:47.452465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:47.475940 1261197 cri.go:89] found id: ""
	I1217 00:54:47.475953 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.475960 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:47.475965 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:47.476021 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:47.500287 1261197 cri.go:89] found id: ""
	I1217 00:54:47.500302 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.500309 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:47.500314 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:47.500371 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:47.537066 1261197 cri.go:89] found id: ""
	I1217 00:54:47.537080 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.537087 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:47.537091 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:47.537147 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:47.561363 1261197 cri.go:89] found id: ""
	I1217 00:54:47.561377 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.561384 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:47.561390 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:47.561446 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:47.586917 1261197 cri.go:89] found id: ""
	I1217 00:54:47.586931 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.586939 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:47.586944 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:47.587006 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:47.611775 1261197 cri.go:89] found id: ""
	I1217 00:54:47.611789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.611796 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:47.611805 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:47.611862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:47.648123 1261197 cri.go:89] found id: ""
	I1217 00:54:47.648137 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.648145 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:47.648152 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:47.648163 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.716428 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:47.716447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:47.732842 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:47.732876 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:47.801539 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:47.801549 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:47.801559 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:47.863256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:47.863276 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:50.394435 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:50.404927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:50.404986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:50.429607 1261197 cri.go:89] found id: ""
	I1217 00:54:50.429621 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.429628 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:50.429634 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:50.429731 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:50.454601 1261197 cri.go:89] found id: ""
	I1217 00:54:50.454615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.454622 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:50.454627 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:50.454689 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:50.484855 1261197 cri.go:89] found id: ""
	I1217 00:54:50.484877 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.484884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:50.484890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:50.484950 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:50.510003 1261197 cri.go:89] found id: ""
	I1217 00:54:50.510018 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.510025 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:50.510030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:50.510089 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:50.533511 1261197 cri.go:89] found id: ""
	I1217 00:54:50.533525 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.533532 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:50.533537 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:50.533602 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:50.558386 1261197 cri.go:89] found id: ""
	I1217 00:54:50.558400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.558407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:50.558419 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:50.558476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:50.587409 1261197 cri.go:89] found id: ""
	I1217 00:54:50.587422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.587429 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:50.587437 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:50.587447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:50.644042 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:50.644061 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:50.661242 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:50.661257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:50.732592 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
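Note: the block above is one full iteration of minikube's apiserver wait loop. Roughly every three seconds it runs `pgrep` for a kube-apiserver process, then asks crictl for each control-plane container by name; `found id: ""` with `0 containers` means none has been created yet. A minimal sketch of replaying the same probe by hand, assuming the profile name from this test (functional-608344) and that `minikube ssh` is available:

    # Query each control-plane container the way the log above does.
    # Component list and crictl flags are copied from the log lines.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      minikube -p functional-608344 ssh -- sudo crictl ps -a --quiet --name="$c"
    done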
	I1217 00:54:50.732602 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:50.732613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:50.793447 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:50.793466 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:53.322439 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:53.332470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:53.332535 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:53.357094 1261197 cri.go:89] found id: ""
	I1217 00:54:53.357108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.357116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:53.357121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:53.357182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:53.381629 1261197 cri.go:89] found id: ""
	I1217 00:54:53.381667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.381674 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:53.381679 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:53.381743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:53.407630 1261197 cri.go:89] found id: ""
	I1217 00:54:53.407644 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.407651 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:53.407656 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:53.407718 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:53.435972 1261197 cri.go:89] found id: ""
	I1217 00:54:53.435986 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.435993 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:53.435999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:53.436059 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:53.461545 1261197 cri.go:89] found id: ""
	I1217 00:54:53.461558 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.461565 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:53.461570 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:53.461629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:53.491744 1261197 cri.go:89] found id: ""
	I1217 00:54:53.491758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.491766 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:53.491771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:53.491836 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:53.517147 1261197 cri.go:89] found id: ""
	I1217 00:54:53.517161 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.517170 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:53.517177 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:53.517188 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:53.573158 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:53.573177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:53.588088 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:53.588104 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:53.665911 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:53.665933 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:53.665945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:53.735506 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:53.735530 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.268624 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:56.279995 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:56.280060 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:56.304847 1261197 cri.go:89] found id: ""
	I1217 00:54:56.304874 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.304881 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:56.304887 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:56.304952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:56.329820 1261197 cri.go:89] found id: ""
	I1217 00:54:56.329834 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.329841 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:56.329846 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:56.329902 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:56.354667 1261197 cri.go:89] found id: ""
	I1217 00:54:56.354685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.354695 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:56.354700 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:56.354779 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:56.383823 1261197 cri.go:89] found id: ""
	I1217 00:54:56.383837 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.383844 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:56.383850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:56.383907 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:56.408219 1261197 cri.go:89] found id: ""
	I1217 00:54:56.408233 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.408240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:56.408246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:56.408305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:56.433745 1261197 cri.go:89] found id: ""
	I1217 00:54:56.433758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.433765 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:56.433771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:56.433843 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:56.458631 1261197 cri.go:89] found id: ""
	I1217 00:54:56.458645 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.458653 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:56.458660 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:56.458671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:56.473217 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:56.473233 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:56.540570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
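Note: between probes, minikube gathers the same five log sources each cycle (kubelet, dmesg, describe nodes, containerd, container status); only "describe nodes" fails, because it is the one that needs the apiserver. The commands below are taken verbatim from the ssh_runner lines above and can be replayed inside the node:

    # Same log gathering as shown in this report; flags are verbatim.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a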
	I1217 00:54:56.540579 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:56.540591 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:56.605775 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:56.605795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.659436 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:56.659452 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.225973 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:59.236165 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:59.236223 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:59.262172 1261197 cri.go:89] found id: ""
	I1217 00:54:59.262185 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.262193 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:59.262198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:59.262254 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:59.286403 1261197 cri.go:89] found id: ""
	I1217 00:54:59.286417 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.286425 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:59.286430 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:59.286489 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:59.311254 1261197 cri.go:89] found id: ""
	I1217 00:54:59.311268 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.311276 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:59.311280 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:59.311336 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:59.339495 1261197 cri.go:89] found id: ""
	I1217 00:54:59.339510 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.339519 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:59.339524 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:59.339583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:59.364038 1261197 cri.go:89] found id: ""
	I1217 00:54:59.364052 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.364068 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:59.364074 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:59.364130 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:59.388359 1261197 cri.go:89] found id: ""
	I1217 00:54:59.388373 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.388391 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:59.388396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:59.388462 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:59.412775 1261197 cri.go:89] found id: ""
	I1217 00:54:59.412789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.412806 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:59.412815 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:59.412824 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:59.475190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:59.475211 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:59.504917 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:59.504933 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.561462 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:59.561481 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:59.576156 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:59.576171 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:59.641179 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:02.141436 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:02.152012 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:02.152075 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:02.180949 1261197 cri.go:89] found id: ""
	I1217 00:55:02.180963 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.180970 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:02.180976 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:02.181046 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:02.204892 1261197 cri.go:89] found id: ""
	I1217 00:55:02.204915 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.204922 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:02.204928 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:02.205035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:02.230226 1261197 cri.go:89] found id: ""
	I1217 00:55:02.230239 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.230247 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:02.230252 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:02.230309 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:02.254922 1261197 cri.go:89] found id: ""
	I1217 00:55:02.254936 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.254944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:02.254949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:02.255012 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:02.279652 1261197 cri.go:89] found id: ""
	I1217 00:55:02.279666 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.279673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:02.279678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:02.279737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:02.306126 1261197 cri.go:89] found id: ""
	I1217 00:55:02.306139 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.306146 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:02.306152 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:02.306209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:02.330968 1261197 cri.go:89] found id: ""
	I1217 00:55:02.330982 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.330989 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:02.330997 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:02.331007 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:02.386453 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:02.386473 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:02.401019 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:02.401036 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:02.462681 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:02.462691 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:02.462701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:02.523460 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:02.523480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.051274 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:05.061850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:05.061924 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:05.087077 1261197 cri.go:89] found id: ""
	I1217 00:55:05.087092 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.087099 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:05.087105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:05.087167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:05.113592 1261197 cri.go:89] found id: ""
	I1217 00:55:05.113607 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.113614 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:05.113620 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:05.113702 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:05.139004 1261197 cri.go:89] found id: ""
	I1217 00:55:05.139019 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.139026 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:05.139031 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:05.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:05.163703 1261197 cri.go:89] found id: ""
	I1217 00:55:05.163717 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.163725 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:05.163731 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:05.163791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:05.188990 1261197 cri.go:89] found id: ""
	I1217 00:55:05.189004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.189011 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:05.189024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:05.189083 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:05.218147 1261197 cri.go:89] found id: ""
	I1217 00:55:05.218161 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.218168 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:05.218174 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:05.218246 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:05.242561 1261197 cri.go:89] found id: ""
	I1217 00:55:05.242575 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.242592 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:05.242600 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:05.242610 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:05.303683 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:05.303701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.331484 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:05.331499 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:05.392845 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:05.392868 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:05.407882 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:05.407898 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:05.474193 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
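Note: every kubectl attempt in these cycles dies with `dial tcp [::1]:8441: connect: connection refused`, i.e. nothing is listening on localhost:8441, the apiserver port this profile uses. That matches crictl finding no kube-apiserver container at all, so the failure is upstream of kubectl. A quick hedged check from inside the node (assumes the `ss` utility is present in the minikube image):

    # Confirm there is no listener on the apiserver port used by this run.
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"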
	I1217 00:55:07.974416 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:07.984527 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:07.984588 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:08.011706 1261197 cri.go:89] found id: ""
	I1217 00:55:08.011722 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.011730 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:08.011735 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:08.011803 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:08.038984 1261197 cri.go:89] found id: ""
	I1217 00:55:08.038998 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.039005 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:08.039011 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:08.039072 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:08.066839 1261197 cri.go:89] found id: ""
	I1217 00:55:08.066854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.066861 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:08.066866 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:08.066928 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:08.096940 1261197 cri.go:89] found id: ""
	I1217 00:55:08.096954 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.096962 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:08.096968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:08.097026 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:08.124219 1261197 cri.go:89] found id: ""
	I1217 00:55:08.124232 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.124240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:08.124245 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:08.124308 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:08.149339 1261197 cri.go:89] found id: ""
	I1217 00:55:08.149353 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.149360 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:08.149365 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:08.149424 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:08.173327 1261197 cri.go:89] found id: ""
	I1217 00:55:08.173350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.173358 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:08.173366 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:08.173376 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:08.229871 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:08.229891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:08.244853 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:08.244877 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:08.312062 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:08.312072 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:08.312082 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:08.373219 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:08.373238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:10.901813 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:10.913062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:10.913131 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:10.939973 1261197 cri.go:89] found id: ""
	I1217 00:55:10.939987 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.939994 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:10.939999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:10.940057 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:10.965488 1261197 cri.go:89] found id: ""
	I1217 00:55:10.965502 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.965509 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:10.965514 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:10.965574 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:10.990743 1261197 cri.go:89] found id: ""
	I1217 00:55:10.990758 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.990766 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:10.990772 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:10.990851 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:11.017298 1261197 cri.go:89] found id: ""
	I1217 00:55:11.017322 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.017330 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:11.017336 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:11.017405 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:11.043148 1261197 cri.go:89] found id: ""
	I1217 00:55:11.043163 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.043170 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:11.043175 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:11.043236 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:11.070182 1261197 cri.go:89] found id: ""
	I1217 00:55:11.070196 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.070207 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:11.070213 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:11.070284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:11.098403 1261197 cri.go:89] found id: ""
	I1217 00:55:11.098419 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.098426 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:11.098434 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:11.098445 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:11.154712 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:11.154732 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:11.171447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:11.171469 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:11.235332 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:11.235344 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:11.235354 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:11.298591 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:11.298611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:13.826200 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:13.836246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:13.836303 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:13.861099 1261197 cri.go:89] found id: ""
	I1217 00:55:13.861113 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.861120 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:13.861125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:13.861183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:13.898315 1261197 cri.go:89] found id: ""
	I1217 00:55:13.898328 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.898335 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:13.898340 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:13.898403 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:13.927870 1261197 cri.go:89] found id: ""
	I1217 00:55:13.927884 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.927902 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:13.927908 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:13.927986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:13.956407 1261197 cri.go:89] found id: ""
	I1217 00:55:13.956421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.956428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:13.956433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:13.956500 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:13.981521 1261197 cri.go:89] found id: ""
	I1217 00:55:13.981553 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.981560 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:13.981565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:13.981630 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:14.007326 1261197 cri.go:89] found id: ""
	I1217 00:55:14.007350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.007358 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:14.007364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:14.007433 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:14.034794 1261197 cri.go:89] found id: ""
	I1217 00:55:14.034809 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.034816 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:14.034824 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:14.034835 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:14.091355 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:14.091375 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:14.106561 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:14.106579 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:14.176400 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:14.176410 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:14.176420 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:14.242568 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:14.242593 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
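	Each cycle also queries the CRI for every control-plane component by name (sudo crictl ps -a --quiet --name=<component>) and treats empty output as "no container found". A hedged sketch of that check follows; the helper name listContainerIDs is hypothetical, and only the crictl invocation is copied from the log.

	// Sketch of the per-component container check behind the
	// "crictl ps -a --quiet --name=..." lines above. Helper name is
	// hypothetical; only the crictl command mirrors the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs crictl prints, one per line,
	// for containers whose name matches the given component.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %d container(s)\n", c, len(ids))
			}
		}
	}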
	I1217 00:55:16.776330 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:16.786496 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:16.786558 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:16.811486 1261197 cri.go:89] found id: ""
	I1217 00:55:16.811500 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.811507 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:16.811512 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:16.811576 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:16.839885 1261197 cri.go:89] found id: ""
	I1217 00:55:16.839898 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.839905 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:16.839910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:16.839972 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:16.865332 1261197 cri.go:89] found id: ""
	I1217 00:55:16.865346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.865353 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:16.865359 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:16.865419 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:16.904044 1261197 cri.go:89] found id: ""
	I1217 00:55:16.904058 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.904065 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:16.904071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:16.904133 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:16.934495 1261197 cri.go:89] found id: ""
	I1217 00:55:16.934508 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.934515 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:16.934521 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:16.934582 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:16.959038 1261197 cri.go:89] found id: ""
	I1217 00:55:16.959052 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.959060 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:16.959065 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:16.959123 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:16.987609 1261197 cri.go:89] found id: ""
	I1217 00:55:16.987622 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.987630 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:16.987637 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:16.987647 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:17.046635 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:17.046655 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:17.062321 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:17.062345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:17.130440 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:17.130450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:17.130460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:17.192501 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:17.192521 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
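	The stderr blocks above all end in connect: connection refused against localhost:8441, which means nothing is listening on the apiserver port inside the node (the container never started); it is not a kubeconfig or host/port typo, despite kubectl's suggestion. A quick probe that distinguishes "refused" from timeouts or DNS failures, with the host and port copied from the log and everything else illustrative:

	// Minimal connectivity probe: "connection refused" means no
	// listener on the port, as in the stderr above. Host/port are
	// taken from the log; the rest is an illustrative sketch.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("something is listening on 8441")
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("connection refused: no listener on 8441 (apiserver down)")
			return
		}
		fmt.Printf("other dial error: %v\n", err)
	}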
	I1217 00:55:19.724677 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:19.736386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:19.736459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:19.763100 1261197 cri.go:89] found id: ""
	I1217 00:55:19.763114 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.763121 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:19.763127 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:19.763185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:19.791470 1261197 cri.go:89] found id: ""
	I1217 00:55:19.791483 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.791490 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:19.791495 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:19.791552 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:19.816395 1261197 cri.go:89] found id: ""
	I1217 00:55:19.816410 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.816417 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:19.816422 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:19.816482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:19.841971 1261197 cri.go:89] found id: ""
	I1217 00:55:19.841984 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.841991 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:19.841997 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:19.842058 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:19.866385 1261197 cri.go:89] found id: ""
	I1217 00:55:19.866399 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.866406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:19.866411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:19.866468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:19.904121 1261197 cri.go:89] found id: ""
	I1217 00:55:19.904135 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.904153 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:19.904160 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:19.904217 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:19.940290 1261197 cri.go:89] found id: ""
	I1217 00:55:19.940304 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.940311 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:19.940319 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:19.940329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:19.955177 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:19.955193 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:20.024806 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:20.024817 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:20.024830 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:20.088972 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:20.088996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:20.122058 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:20.122075 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
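	Note that the order of the gathering steps varies between cycles above (kubelet sometimes comes last). A sketch of that fan-out follows: one bounded command per log source, with the command strings copied verbatim from the Run: lines; the surrounding structure, including the unordered map that mirrors the varying order, is illustrative only.

	// Sketch of the log-gathering fan-out visible above. Command
	// strings are copied from the log; structure is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"containerd":       "sudo journalctl -u containerd -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources { // map order is unspecified, like the cycles above
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
			}
			fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
		}
	}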
	I1217 00:55:22.679929 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:22.690102 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:22.690162 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:22.717462 1261197 cri.go:89] found id: ""
	I1217 00:55:22.717476 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.717483 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:22.717489 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:22.717550 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:22.744363 1261197 cri.go:89] found id: ""
	I1217 00:55:22.744377 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.744390 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:22.744395 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:22.744454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:22.770975 1261197 cri.go:89] found id: ""
	I1217 00:55:22.770989 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.770996 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:22.771001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:22.771068 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:22.795702 1261197 cri.go:89] found id: ""
	I1217 00:55:22.795716 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.795724 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:22.795729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:22.795787 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:22.820186 1261197 cri.go:89] found id: ""
	I1217 00:55:22.820200 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.820206 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:22.820212 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:22.820269 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:22.844518 1261197 cri.go:89] found id: ""
	I1217 00:55:22.844533 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.844540 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:22.844545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:22.844604 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:22.884821 1261197 cri.go:89] found id: ""
	I1217 00:55:22.884834 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.884841 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:22.884849 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:22.884860 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:22.901504 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:22.901520 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:22.975115 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:22.975125 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:22.975135 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:23.036546 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:23.036566 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:23.070681 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:23.070697 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.627462 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:25.638109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:25.638168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:25.671791 1261197 cri.go:89] found id: ""
	I1217 00:55:25.671806 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.671813 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:25.671821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:25.671884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:25.696990 1261197 cri.go:89] found id: ""
	I1217 00:55:25.697004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.697011 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:25.697016 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:25.697082 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:25.722087 1261197 cri.go:89] found id: ""
	I1217 00:55:25.722101 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.722110 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:25.722115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:25.722184 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:25.747407 1261197 cri.go:89] found id: ""
	I1217 00:55:25.747421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.747428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:25.747433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:25.747495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:25.772602 1261197 cri.go:89] found id: ""
	I1217 00:55:25.772617 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.772623 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:25.772628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:25.772694 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:25.802452 1261197 cri.go:89] found id: ""
	I1217 00:55:25.802466 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.802473 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:25.802478 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:25.802538 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:25.827066 1261197 cri.go:89] found id: ""
	I1217 00:55:25.827081 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.827088 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:25.827096 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:25.827109 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.886656 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:25.886676 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:25.903090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:25.903108 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:25.973568 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:25.973578 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:25.973587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:26.036642 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:26.036662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:28.571573 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:28.581543 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:28.581601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:28.607202 1261197 cri.go:89] found id: ""
	I1217 00:55:28.607216 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.607224 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:28.607229 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:28.607288 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:28.630842 1261197 cri.go:89] found id: ""
	I1217 00:55:28.630857 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.630864 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:28.630869 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:28.630927 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:28.656052 1261197 cri.go:89] found id: ""
	I1217 00:55:28.656066 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.656073 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:28.656079 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:28.656135 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:28.680008 1261197 cri.go:89] found id: ""
	I1217 00:55:28.680022 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.680029 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:28.680034 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:28.680104 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:28.704668 1261197 cri.go:89] found id: ""
	I1217 00:55:28.704682 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.704689 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:28.704694 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:28.704756 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:28.733961 1261197 cri.go:89] found id: ""
	I1217 00:55:28.733974 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.733981 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:28.733986 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:28.734042 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:28.759990 1261197 cri.go:89] found id: ""
	I1217 00:55:28.760005 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.760013 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:28.760021 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:28.760030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:28.815642 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:28.815661 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:28.830313 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:28.830333 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:28.907265 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:28.907287 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:28.907299 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:28.978223 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:28.978244 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
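	The describe-nodes step runs the node's own kubectl binary against /var/lib/minikube/kubeconfig and, on failure, folds both streams into the warning, which appears to be why each failed describe nodes entry above carries its stderr twice (once inline in the error, once between the ** stderr ** markers). A sketch with the binary and kubeconfig paths copied from the log and the wrapper itself illustrative:

	// Sketch of the "describe nodes" step above: run the node-local
	// kubectl and keep stderr for the report on failure. Paths are
	// copied from the log; the wrapper is not minikube's code.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
		cmd := exec.Command("sudo", kubectl, "describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			// Surface both streams, as the warning entries above do.
			fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, stdout.String(), stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}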
	I1217 00:55:31.508374 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:31.518631 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:31.518696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:31.543672 1261197 cri.go:89] found id: ""
	I1217 00:55:31.543686 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.543693 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:31.543701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:31.543760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:31.568914 1261197 cri.go:89] found id: ""
	I1217 00:55:31.568929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.568944 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:31.568949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:31.569017 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:31.593432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.593453 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.593461 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:31.593466 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:31.593537 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:31.619217 1261197 cri.go:89] found id: ""
	I1217 00:55:31.619231 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.619238 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:31.619243 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:31.619299 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:31.647432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.647445 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.647453 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:31.647458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:31.647522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:31.675117 1261197 cri.go:89] found id: ""
	I1217 00:55:31.675130 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.675138 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:31.675143 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:31.675200 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:31.698973 1261197 cri.go:89] found id: ""
	I1217 00:55:31.698986 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.698993 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:31.699001 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:31.699010 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:31.754429 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:31.754447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:31.768968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:31.768984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:31.831791 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:31.831801 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:31.831811 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:31.900759 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:31.900777 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:34.429727 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:34.440562 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:34.440629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:34.465412 1261197 cri.go:89] found id: ""
	I1217 00:55:34.465425 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.465433 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:34.465438 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:34.465496 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:34.489937 1261197 cri.go:89] found id: ""
	I1217 00:55:34.489951 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.489978 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:34.489987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:34.490055 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:34.520581 1261197 cri.go:89] found id: ""
	I1217 00:55:34.520602 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.520610 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:34.520615 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:34.520682 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:34.547718 1261197 cri.go:89] found id: ""
	I1217 00:55:34.547732 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.547739 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:34.547744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:34.547806 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:34.572103 1261197 cri.go:89] found id: ""
	I1217 00:55:34.572116 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.572133 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:34.572138 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:34.572209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:34.600789 1261197 cri.go:89] found id: ""
	I1217 00:55:34.600819 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.600827 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:34.600832 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:34.600921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:34.627220 1261197 cri.go:89] found id: ""
	I1217 00:55:34.627234 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.627240 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:34.627248 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:34.627257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:34.682307 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:34.682327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:34.697255 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:34.697271 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:34.764504 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:34.764515 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:34.764525 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:34.826010 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:34.826029 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.353119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:37.363135 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:37.363198 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:37.387754 1261197 cri.go:89] found id: ""
	I1217 00:55:37.387773 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.387781 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:37.387787 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:37.387845 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:37.413391 1261197 cri.go:89] found id: ""
	I1217 00:55:37.413404 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.413411 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:37.413417 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:37.413474 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:37.439523 1261197 cri.go:89] found id: ""
	I1217 00:55:37.439537 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.439544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:37.439549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:37.439607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:37.469209 1261197 cri.go:89] found id: ""
	I1217 00:55:37.469223 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.469230 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:37.469235 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:37.469296 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:37.495794 1261197 cri.go:89] found id: ""
	I1217 00:55:37.495807 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.495814 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:37.495819 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:37.495875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:37.520612 1261197 cri.go:89] found id: ""
	I1217 00:55:37.520625 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.520642 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:37.520648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:37.520720 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:37.547269 1261197 cri.go:89] found id: ""
	I1217 00:55:37.547283 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.547290 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:37.547299 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:37.547308 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:37.608835 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:37.608856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.635364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:37.635383 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:37.694966 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:37.694984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:37.709746 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:37.709763 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:37.775515 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
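	The cycle above is minikube's apiserver wait loop: roughly every three seconds it probes for a kube-apiserver process with pgrep, asks the CRI runtime for each expected control-plane container by name, and, finding none, re-gathers the kubelet, dmesg, containerd, and container-status logs before trying again. A minimal shell sketch of the per-component probe (illustrative only, not minikube's actual code; it assumes you are inside the minikube node with crictl configured against containerd):
	
	# Sketch of the CRI probe sequence seen in the log above.
	components=(kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	for name in "${components[@]}"; do
		# -a includes exited containers; --quiet prints only container IDs
		ids=$(sudo crictl ps -a --quiet --name="$name")
		if [ -z "$ids" ]; then
			echo "No container was found matching \"$name\""
		else
			echo "$name: $ids"
		fi
	done
	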
	I1217 00:55:40.277182 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:40.287332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:40.287393 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:40.315852 1261197 cri.go:89] found id: ""
	I1217 00:55:40.315866 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.315873 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:40.315879 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:40.315936 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:40.340196 1261197 cri.go:89] found id: ""
	I1217 00:55:40.340210 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.340217 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:40.340222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:40.340279 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:40.365794 1261197 cri.go:89] found id: ""
	I1217 00:55:40.365815 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.365823 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:40.365828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:40.365899 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:40.391466 1261197 cri.go:89] found id: ""
	I1217 00:55:40.391480 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.391488 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:40.391493 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:40.391553 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:40.420286 1261197 cri.go:89] found id: ""
	I1217 00:55:40.420300 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.420307 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:40.420312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:40.420373 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:40.449247 1261197 cri.go:89] found id: ""
	I1217 00:55:40.449261 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.449268 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:40.449274 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:40.449331 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:40.474951 1261197 cri.go:89] found id: ""
	I1217 00:55:40.474965 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.474972 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:40.474980 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:40.474990 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:40.540502 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:40.540513 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:40.540524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:40.602747 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:40.602766 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:40.629888 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:40.629904 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:40.686174 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:40.686191 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.201825 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:43.212126 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:43.212185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:43.237088 1261197 cri.go:89] found id: ""
	I1217 00:55:43.237109 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.237115 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:43.237121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:43.237183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:43.262148 1261197 cri.go:89] found id: ""
	I1217 00:55:43.262162 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.262177 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:43.262182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:43.262239 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:43.286264 1261197 cri.go:89] found id: ""
	I1217 00:55:43.286278 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.286285 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:43.286290 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:43.286346 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:43.310644 1261197 cri.go:89] found id: ""
	I1217 00:55:43.310657 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.310664 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:43.310670 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:43.310730 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:43.335131 1261197 cri.go:89] found id: ""
	I1217 00:55:43.335146 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.335153 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:43.335158 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:43.335220 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:43.364301 1261197 cri.go:89] found id: ""
	I1217 00:55:43.364315 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.364323 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:43.364331 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:43.364390 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:43.391204 1261197 cri.go:89] found id: ""
	I1217 00:55:43.391218 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.391225 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:43.391233 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:43.391252 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:43.450751 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:43.450771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.466709 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:43.466726 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:43.533713 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:43.533723 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:43.533734 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:43.601250 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:43.601269 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.134875 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:46.146399 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:46.146468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:46.179014 1261197 cri.go:89] found id: ""
	I1217 00:55:46.179028 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.179044 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:46.179050 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:46.179115 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:46.208346 1261197 cri.go:89] found id: ""
	I1217 00:55:46.208360 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.208377 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:46.208383 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:46.208441 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:46.233331 1261197 cri.go:89] found id: ""
	I1217 00:55:46.233346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.233361 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:46.233367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:46.233423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:46.259330 1261197 cri.go:89] found id: ""
	I1217 00:55:46.259344 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.259351 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:46.259357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:46.259413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:46.283871 1261197 cri.go:89] found id: ""
	I1217 00:55:46.283885 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.283902 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:46.283907 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:46.283975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:46.308301 1261197 cri.go:89] found id: ""
	I1217 00:55:46.308316 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.308331 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:46.308337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:46.308397 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:46.332677 1261197 cri.go:89] found id: ""
	I1217 00:55:46.332691 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.332699 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:46.332706 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:46.332716 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:46.347830 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:46.347846 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:46.413688 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:46.413699 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:46.413709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:46.475238 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:46.475260 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.502692 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:46.502708 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
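	Every retry fails the same way: nothing is listening on localhost:8441 inside the node, so each kubectl invocation logs the same API-discovery error five times and exits 1 with "connection refused". The port itself can be checked by hand, sketched here under the assumption that the functional-608344 profile from this run is still up:
	
	# Is anything listening on the configured apiserver port (8441) inside the node?
	minikube -p functional-608344 ssh -- "sudo ss -ltnp | grep 8441" \
		|| echo "nothing listening on 8441"
	# The health endpoint refuses connections for as long as the apiserver is down:
	minikube -p functional-608344 ssh -- "curl -sk https://localhost:8441/healthz" \
		|| echo "connect: connection refused"
	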
	I1217 00:55:49.063356 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:49.074298 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:49.074364 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:49.102541 1261197 cri.go:89] found id: ""
	I1217 00:55:49.102555 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.102562 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:49.102567 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:49.102625 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:49.132690 1261197 cri.go:89] found id: ""
	I1217 00:55:49.132706 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.132713 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:49.132718 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:49.132780 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:49.159962 1261197 cri.go:89] found id: ""
	I1217 00:55:49.159976 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.159983 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:49.159987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:49.160047 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:49.186672 1261197 cri.go:89] found id: ""
	I1217 00:55:49.186685 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.186692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:49.186703 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:49.186760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:49.215488 1261197 cri.go:89] found id: ""
	I1217 00:55:49.215506 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.215513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:49.215518 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:49.215594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:49.243652 1261197 cri.go:89] found id: ""
	I1217 00:55:49.243667 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.243674 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:49.243680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:49.243746 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:49.271745 1261197 cri.go:89] found id: ""
	I1217 00:55:49.271762 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.271769 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:49.271777 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:49.271789 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:49.305614 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:49.305638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.361396 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:49.361414 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:49.377081 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:49.377097 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:49.448394 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:49.448405 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:49.448416 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.014619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:52.025272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:52.025334 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:52.050179 1261197 cri.go:89] found id: ""
	I1217 00:55:52.050193 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.050201 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:52.050206 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:52.050267 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:52.075171 1261197 cri.go:89] found id: ""
	I1217 00:55:52.075186 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.075193 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:52.075198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:52.075258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:52.100730 1261197 cri.go:89] found id: ""
	I1217 00:55:52.100745 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.100752 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:52.100758 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:52.100819 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:52.139001 1261197 cri.go:89] found id: ""
	I1217 00:55:52.139016 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.139023 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:52.139028 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:52.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:52.167837 1261197 cri.go:89] found id: ""
	I1217 00:55:52.167854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.167861 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:52.167876 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:52.167939 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:52.195893 1261197 cri.go:89] found id: ""
	I1217 00:55:52.195907 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.195914 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:52.195919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:52.195986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:52.226474 1261197 cri.go:89] found id: ""
	I1217 00:55:52.226489 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.226496 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:52.226504 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:52.226514 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:52.283106 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:52.283125 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:52.298214 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:52.298230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:52.368183 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:52.368194 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:52.368205 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.430851 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:52.430873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:54.962672 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.972814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:54.972874 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:54.998554 1261197 cri.go:89] found id: ""
	I1217 00:55:54.998568 1261197 logs.go:282] 0 containers: []
	W1217 00:55:54.998575 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:54.998580 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:54.998640 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:55.027159 1261197 cri.go:89] found id: ""
	I1217 00:55:55.027174 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.027181 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:55.027187 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:55.027258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:55.057204 1261197 cri.go:89] found id: ""
	I1217 00:55:55.057219 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.057226 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:55.057241 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:55.057302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:55.082858 1261197 cri.go:89] found id: ""
	I1217 00:55:55.082872 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.082880 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:55.082885 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:55.082952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:55.108074 1261197 cri.go:89] found id: ""
	I1217 00:55:55.108088 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.108095 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:55.108100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:55.108168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:55.142170 1261197 cri.go:89] found id: ""
	I1217 00:55:55.142184 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.142204 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:55.142210 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:55.142277 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:55.174306 1261197 cri.go:89] found id: ""
	I1217 00:55:55.174333 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.174341 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:55.174349 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:55.174361 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:55.234605 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:55.234625 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:55.249756 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:55.249773 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:55.312439 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:55.312450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:55.312460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:55.373256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:55.373275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
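	The "describe nodes" probe is just the kubectl binary that minikube stages on the node, pointed at the node-local kubeconfig. It can be replayed by hand (paths verbatim from the log above) and keeps exiting 1 until a kube-apiserver container is actually running:
	
	# Re-run the exact describe-nodes probe from inside the node.
	minikube -p functional-608344 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
		describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# Fails with: "The connection to the server localhost:8441 was refused -
	# did you specify the right host or port?"
	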
	I1217 00:55:57.900997 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.911464 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:57.911522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:57.936082 1261197 cri.go:89] found id: ""
	I1217 00:55:57.936096 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.936104 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:57.936115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:57.936172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:57.960175 1261197 cri.go:89] found id: ""
	I1217 00:55:57.960190 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.960197 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:57.960202 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:57.960266 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:57.986658 1261197 cri.go:89] found id: ""
	I1217 00:55:57.986671 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.986678 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:57.986684 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:57.986743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:58.012944 1261197 cri.go:89] found id: ""
	I1217 00:55:58.012959 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.012967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:58.012973 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:58.013035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:58.041226 1261197 cri.go:89] found id: ""
	I1217 00:55:58.041241 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.041248 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:58.041253 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:58.041319 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:58.066914 1261197 cri.go:89] found id: ""
	I1217 00:55:58.066929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.066937 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:58.066943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:58.067000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:58.090571 1261197 cri.go:89] found id: ""
	I1217 00:55:58.090586 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.090593 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:58.090601 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:58.090611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:58.161546 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:58.161556 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:58.161578 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:58.230111 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:58.230131 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:58.259134 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:58.259150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:58.315698 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:58.315715 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:00.831924 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.842106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:00.842166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:00.868037 1261197 cri.go:89] found id: ""
	I1217 00:56:00.868051 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.868057 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:00.868062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:00.868138 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:00.893020 1261197 cri.go:89] found id: ""
	I1217 00:56:00.893046 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.893053 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:00.893059 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:00.893125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:00.918054 1261197 cri.go:89] found id: ""
	I1217 00:56:00.918068 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.918075 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:00.918081 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:00.918139 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:00.947584 1261197 cri.go:89] found id: ""
	I1217 00:56:00.947599 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.947607 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:00.947612 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:00.947675 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:00.974913 1261197 cri.go:89] found id: ""
	I1217 00:56:00.974929 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.974936 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:00.974941 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:00.975000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:00.998262 1261197 cri.go:89] found id: ""
	I1217 00:56:00.998276 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.998284 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:00.998289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:00.998345 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:01.025055 1261197 cri.go:89] found id: ""
	I1217 00:56:01.025071 1261197 logs.go:282] 0 containers: []
	W1217 00:56:01.025079 1261197 logs.go:284] No container was found matching "kindnet"
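
	The sequence from `pgrep` through the seven `crictl ps` queries above is the apiserver liveness probe: minikube first looks for a running kube-apiserver process, then asks the CRI for each expected control-plane container by name. Every query here returns an empty ID list (`found id: ""`), so no control-plane container exists and the loop falls through to log gathering again. A sketch of the same checks run manually, assuming a shell inside the node:

	# newest process whose full command line matches the pattern exactly
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# empty output from crictl means no container with that name exists
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		sudo crictl ps -a --quiet --name="$c"
	done
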
	I1217 00:56:01.025099 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:01.025110 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:01.080854 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:01.080873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:01.095680 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:01.095698 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:01.174559 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:01.164757   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.165678   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.167766   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.168430   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.170271   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
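
	Every `describe nodes` attempt fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port, which is consistent with the empty crictl probes above. A couple of quick manual checks from inside the node (illustrative additions, not part of the test run):

	# the apiserver serves /livez when it is up; -k skips TLS verification
	curl -sk https://localhost:8441/livez || echo "apiserver not reachable on :8441"
	# confirm nothing is bound to the port
	sudo ss -ltnp | grep -w 8441 || echo "no listener on :8441"
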
	I1217 00:56:01.174574 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:01.174587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:01.240953 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:01.240973 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:03.778460 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.788536 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:03.788601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:03.816065 1261197 cri.go:89] found id: ""
	I1217 00:56:03.816080 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.816087 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:03.816093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:03.816158 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:03.840359 1261197 cri.go:89] found id: ""
	I1217 00:56:03.840373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.840381 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:03.840386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:03.840443 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:03.865338 1261197 cri.go:89] found id: ""
	I1217 00:56:03.865351 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.865359 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:03.865364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:03.865421 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:03.889916 1261197 cri.go:89] found id: ""
	I1217 00:56:03.889930 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.889937 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:03.889943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:03.890011 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:03.913782 1261197 cri.go:89] found id: ""
	I1217 00:56:03.913796 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.913804 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:03.913815 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:03.913875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:03.938356 1261197 cri.go:89] found id: ""
	I1217 00:56:03.938371 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.938379 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:03.938385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:03.938447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:03.963432 1261197 cri.go:89] found id: ""
	I1217 00:56:03.963446 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.963454 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:03.963461 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:03.963474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:04.024730 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:04.024752 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:04.057316 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:04.057331 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:04.115813 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:04.115832 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:04.133889 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:04.133905 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:04.212782 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:04.204758   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.205392   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.206948   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.207288   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.208782   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.713766 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.723767 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:06.723837 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:06.747547 1261197 cri.go:89] found id: ""
	I1217 00:56:06.747561 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.747568 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:06.747574 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:06.747632 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:06.772850 1261197 cri.go:89] found id: ""
	I1217 00:56:06.772864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.772871 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:06.772877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:06.772942 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:06.797087 1261197 cri.go:89] found id: ""
	I1217 00:56:06.797101 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.797108 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:06.797113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:06.797171 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:06.821815 1261197 cri.go:89] found id: ""
	I1217 00:56:06.821829 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.821836 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:06.821842 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:06.821906 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:06.850207 1261197 cri.go:89] found id: ""
	I1217 00:56:06.850221 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.850229 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:06.850234 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:06.850294 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:06.874139 1261197 cri.go:89] found id: ""
	I1217 00:56:06.874153 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.874160 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:06.874166 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:06.874224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:06.899438 1261197 cri.go:89] found id: ""
	I1217 00:56:06.899453 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.899461 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:06.899469 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:06.899480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:06.967530 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:06.958975   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.959516   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961123   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961674   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.963331   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.967542 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:06.967554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:07.030281 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:07.030301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:07.062210 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:07.062226 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:07.121373 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:07.121391 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:09.638141 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.648301 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:09.648359 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:09.672936 1261197 cri.go:89] found id: ""
	I1217 00:56:09.672951 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.672959 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:09.672964 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:09.673022 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:09.697500 1261197 cri.go:89] found id: ""
	I1217 00:56:09.697513 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.697520 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:09.697526 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:09.697583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:09.723330 1261197 cri.go:89] found id: ""
	I1217 00:56:09.723344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.723352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:09.723360 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:09.723423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:09.747017 1261197 cri.go:89] found id: ""
	I1217 00:56:09.747032 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.747039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:09.747044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:09.747100 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:09.771652 1261197 cri.go:89] found id: ""
	I1217 00:56:09.771666 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.771673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:09.771678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:09.771737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:09.799785 1261197 cri.go:89] found id: ""
	I1217 00:56:09.799799 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.799807 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:09.799812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:09.799871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:09.827063 1261197 cri.go:89] found id: ""
	I1217 00:56:09.827077 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.827085 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:09.827093 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:09.827103 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:09.894392 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:09.886579   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.887120   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.888619   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.889055   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.890605   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:09.894403 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:09.894413 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:09.955961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:09.955981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:09.982364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:09.982380 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:10.051689 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:10.051709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.568963 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.579001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:12.579065 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:12.603247 1261197 cri.go:89] found id: ""
	I1217 00:56:12.603261 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.603269 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:12.603275 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:12.603332 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:12.635591 1261197 cri.go:89] found id: ""
	I1217 00:56:12.635606 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.635612 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:12.635617 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:12.635676 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:12.659802 1261197 cri.go:89] found id: ""
	I1217 00:56:12.659817 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.659824 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:12.659830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:12.659887 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:12.684671 1261197 cri.go:89] found id: ""
	I1217 00:56:12.684684 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.684692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:12.684697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:12.684766 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:12.712570 1261197 cri.go:89] found id: ""
	I1217 00:56:12.712584 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.712606 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:12.712611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:12.712668 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:12.739330 1261197 cri.go:89] found id: ""
	I1217 00:56:12.739345 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.739353 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:12.739358 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:12.739416 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:12.767372 1261197 cri.go:89] found id: ""
	I1217 00:56:12.767386 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.767393 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:12.767401 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:12.767411 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:12.822789 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:12.822807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.839685 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:12.839702 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:12.916219 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:12.907759   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.908464   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910139   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910712   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.912266   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:12.916230 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:12.916241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:12.977800 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:12.977820 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.507621 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.518177 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:15.518240 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:15.544777 1261197 cri.go:89] found id: ""
	I1217 00:56:15.544792 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.544800 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:15.544806 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:15.544864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:15.569420 1261197 cri.go:89] found id: ""
	I1217 00:56:15.569433 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.569441 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:15.569447 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:15.569505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:15.594329 1261197 cri.go:89] found id: ""
	I1217 00:56:15.594344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.594352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:15.594357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:15.594417 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:15.619820 1261197 cri.go:89] found id: ""
	I1217 00:56:15.619834 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.619842 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:15.619847 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:15.619911 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:15.645055 1261197 cri.go:89] found id: ""
	I1217 00:56:15.645076 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.645084 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:15.645090 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:15.645152 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:15.671575 1261197 cri.go:89] found id: ""
	I1217 00:56:15.671590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.671597 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:15.671602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:15.671667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:15.700941 1261197 cri.go:89] found id: ""
	I1217 00:56:15.700955 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.700963 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:15.700971 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:15.700980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.728886 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:15.728931 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:15.784718 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:15.784736 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:15.799312 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:15.799335 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:15.865192 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:15.865203 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:15.865214 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.428562 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.438711 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:18.438772 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:18.465045 1261197 cri.go:89] found id: ""
	I1217 00:56:18.465060 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.465067 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:18.465073 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:18.465132 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:18.490715 1261197 cri.go:89] found id: ""
	I1217 00:56:18.490728 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.490736 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:18.490741 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:18.490799 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:18.519522 1261197 cri.go:89] found id: ""
	I1217 00:56:18.519536 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.519544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:18.519549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:18.519611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:18.545098 1261197 cri.go:89] found id: ""
	I1217 00:56:18.545112 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.545119 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:18.545125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:18.545183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:18.570978 1261197 cri.go:89] found id: ""
	I1217 00:56:18.570993 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.571000 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:18.571005 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:18.571063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:18.594800 1261197 cri.go:89] found id: ""
	I1217 00:56:18.594814 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.594822 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:18.594828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:18.594884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:18.618575 1261197 cri.go:89] found id: ""
	I1217 00:56:18.618589 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.618597 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:18.618604 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:18.618613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.680474 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:18.680494 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:18.708635 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:18.708651 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:18.763927 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:18.763949 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:18.780209 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:18.780225 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:18.849998 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:21.351687 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.362159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:21.362230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:21.390614 1261197 cri.go:89] found id: ""
	I1217 00:56:21.390630 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.390637 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:21.390648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:21.390716 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:21.420609 1261197 cri.go:89] found id: ""
	I1217 00:56:21.420623 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.420630 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:21.420636 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:21.420703 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:21.446943 1261197 cri.go:89] found id: ""
	I1217 00:56:21.446957 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.446964 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:21.446970 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:21.447041 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:21.477813 1261197 cri.go:89] found id: ""
	I1217 00:56:21.477828 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.477835 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:21.477841 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:21.477901 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:21.504024 1261197 cri.go:89] found id: ""
	I1217 00:56:21.504058 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.504065 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:21.504071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:21.504150 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:21.534132 1261197 cri.go:89] found id: ""
	I1217 00:56:21.534146 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.534154 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:21.534159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:21.534222 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:21.558094 1261197 cri.go:89] found id: ""
	I1217 00:56:21.558113 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.558122 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:21.558130 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:21.558141 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:21.620436 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:21.620462 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:21.635283 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:21.635301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:21.698118 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:21.698128 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:21.698139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:21.760016 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:21.760037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.289952 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.300354 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:24.300457 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:24.324823 1261197 cri.go:89] found id: ""
	I1217 00:56:24.324838 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.324846 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:24.324852 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:24.324921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:24.349508 1261197 cri.go:89] found id: ""
	I1217 00:56:24.349522 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.349528 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:24.349534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:24.349592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:24.375701 1261197 cri.go:89] found id: ""
	I1217 00:56:24.375716 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.375723 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:24.375729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:24.375791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:24.412359 1261197 cri.go:89] found id: ""
	I1217 00:56:24.412373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.412380 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:24.412385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:24.412447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:24.440423 1261197 cri.go:89] found id: ""
	I1217 00:56:24.440437 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.440444 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:24.440450 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:24.440511 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:24.471294 1261197 cri.go:89] found id: ""
	I1217 00:56:24.471308 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.471316 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:24.471322 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:24.471391 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:24.496845 1261197 cri.go:89] found id: ""
	I1217 00:56:24.496859 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.496866 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:24.496874 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:24.496892 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.526610 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:24.526627 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:24.583266 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:24.583327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:24.598272 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:24.598288 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:24.660553 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
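Each "describe nodes" attempt fails the same way: nothing is listening on the apiserver port yet. A minimal manual probe of the same endpoint (a sketch, assuming the functional-608344 profile from this run):

    # Probe the endpoint kubectl cannot reach; "connection refused" on
    # localhost:8441 means no kube-apiserver process is listening.
    curl -k 'https://localhost:8441/api?timeout=32s'
    # From the host, check for a listener inside the node:
    minikube -p functional-608344 ssh -- sudo ss -tln sport = :8441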
	I1217 00:56:24.660563 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:24.660574 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:27.222739 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.232603 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:27.232662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:27.259034 1261197 cri.go:89] found id: ""
	I1217 00:56:27.259048 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.259056 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:27.259061 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:27.259122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:27.282406 1261197 cri.go:89] found id: ""
	I1217 00:56:27.282420 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.282427 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:27.282432 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:27.282490 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:27.306518 1261197 cri.go:89] found id: ""
	I1217 00:56:27.306532 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.306540 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:27.306545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:27.306603 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:27.335278 1261197 cri.go:89] found id: ""
	I1217 00:56:27.335292 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.335299 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:27.335305 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:27.335363 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:27.359793 1261197 cri.go:89] found id: ""
	I1217 00:56:27.359808 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.359815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:27.359829 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:27.359888 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:27.399251 1261197 cri.go:89] found id: ""
	I1217 00:56:27.399275 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.399283 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:27.399289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:27.399355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:27.426464 1261197 cri.go:89] found id: ""
	I1217 00:56:27.426477 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.426495 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:27.426503 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:27.426513 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:27.458980 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:27.458996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:27.514403 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:27.514424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:27.528951 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:27.528969 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:27.592165 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:27.592175 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:27.592187 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.157841 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.168783 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:30.168847 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:30.194237 1261197 cri.go:89] found id: ""
	I1217 00:56:30.194251 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.194259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:30.194264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:30.194329 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:30.220057 1261197 cri.go:89] found id: ""
	I1217 00:56:30.220072 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.220079 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:30.220084 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:30.220141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:30.244965 1261197 cri.go:89] found id: ""
	I1217 00:56:30.244980 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.244987 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:30.244992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:30.245051 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:30.269893 1261197 cri.go:89] found id: ""
	I1217 00:56:30.269907 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.269914 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:30.269919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:30.269976 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:30.294384 1261197 cri.go:89] found id: ""
	I1217 00:56:30.294398 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.294406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:30.294411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:30.294469 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:30.325240 1261197 cri.go:89] found id: ""
	I1217 00:56:30.325254 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.325261 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:30.325266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:30.325322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:30.349591 1261197 cri.go:89] found id: ""
	I1217 00:56:30.349604 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.349611 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:30.349619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:30.349629 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:30.409349 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:30.409368 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:30.426814 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:30.426833 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:30.497852 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:30.497861 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:30.497872 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.559124 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:30.559146 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.090237 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.100535 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:33.100594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:33.124070 1261197 cri.go:89] found id: ""
	I1217 00:56:33.124085 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.124092 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:33.124098 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:33.124155 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:33.148807 1261197 cri.go:89] found id: ""
	I1217 00:56:33.148821 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.148828 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:33.148833 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:33.148894 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:33.175576 1261197 cri.go:89] found id: ""
	I1217 00:56:33.175590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.175597 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:33.175602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:33.175660 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:33.200012 1261197 cri.go:89] found id: ""
	I1217 00:56:33.200026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.200033 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:33.200038 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:33.200095 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:33.224891 1261197 cri.go:89] found id: ""
	I1217 00:56:33.224921 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.224928 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:33.224933 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:33.225001 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:33.249021 1261197 cri.go:89] found id: ""
	I1217 00:56:33.249035 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.249043 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:33.249052 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:33.249108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:33.272696 1261197 cri.go:89] found id: ""
	I1217 00:56:33.272710 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.272717 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:33.272733 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:33.272743 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:33.333826 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:33.333848 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.363111 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:33.363134 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:33.426200 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:33.426219 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:33.444135 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:33.444152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:33.510910 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:36.011142 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.023140 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:36.023216 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:36.051599 1261197 cri.go:89] found id: ""
	I1217 00:56:36.051614 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.051622 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:36.051628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:36.051700 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:36.076217 1261197 cri.go:89] found id: ""
	I1217 00:56:36.076231 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.076239 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:36.076244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:36.076305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:36.104998 1261197 cri.go:89] found id: ""
	I1217 00:56:36.105026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.105034 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:36.105039 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:36.105108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:36.130127 1261197 cri.go:89] found id: ""
	I1217 00:56:36.130142 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.130149 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:36.130154 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:36.130224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:36.155615 1261197 cri.go:89] found id: ""
	I1217 00:56:36.155629 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.155636 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:36.155648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:36.155709 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:36.181850 1261197 cri.go:89] found id: ""
	I1217 00:56:36.181864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.181872 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:36.181877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:36.181937 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:36.208111 1261197 cri.go:89] found id: ""
	I1217 00:56:36.208126 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.208133 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:36.208141 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:36.208152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:36.266007 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:36.266031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:36.281259 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:36.281275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:36.346325 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:36.346335 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:36.346345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:36.412961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:36.412981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
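The pgrep probes above fire on a roughly three-second cadence (00:56:24, :27, :30, :33, :36, :38): the same diagnostics rerun until an apiserver process shows up or the retry budget is exhausted. Reduced to its shell skeleton (a sketch):

    # The apiserver wait loop behind the repeated probes above.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3   # each miss triggers another round of log gathering
    done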
	I1217 00:56:38.945107 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.955445 1261197 kubeadm.go:602] duration metric: took 4m3.371937848s to restartPrimaryControlPlane
	W1217 00:56:38.955509 1261197 out.go:285] ! Unable to restart control-plane node(s), will reset cluster
	I1217 00:56:38.955586 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 00:56:39.375604 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:56:39.388977 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:56:39.396884 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:56:39.396954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:56:39.404783 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:56:39.404792 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 00:56:39.404853 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:56:39.412686 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:56:39.412740 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:56:39.420350 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:56:39.427923 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:56:39.427975 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:56:39.435272 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.442721 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:56:39.442775 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.450389 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:56:39.458043 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:56:39.458098 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
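The four grep-then-rm pairs above are one stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init rewrites it. Condensed into a loop (a sketch; here every grep fails because the files are already gone, so each rm -f is a no-op):

    # Drop any kubeconfig that does not point at the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' \
            "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done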
	I1217 00:56:39.465332 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:56:39.508240 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:56:39.508300 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:56:39.586995 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:56:39.587071 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:56:39.587116 1261197 kubeadm.go:319] OS: Linux
	I1217 00:56:39.587161 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:56:39.587217 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:56:39.587273 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:56:39.587330 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:56:39.587376 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:56:39.587433 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:56:39.587488 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:56:39.587544 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:56:39.587589 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:56:39.658303 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:56:39.658422 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:56:39.658518 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:56:39.670076 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:56:39.675448 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 00:56:39.675545 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:56:39.675618 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:56:39.675704 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:56:39.675774 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:56:39.675852 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:56:39.675914 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:56:39.675983 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:56:39.676053 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:56:39.676144 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:56:39.676224 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:56:39.676260 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:56:39.676329 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:56:39.801204 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:56:39.954898 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:56:40.065909 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:56:40.451062 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:56:40.596539 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:56:40.597062 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:56:40.600429 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:56:40.603602 1261197 out.go:252]   - Booting up control plane ...
	I1217 00:56:40.603714 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:56:40.603797 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:56:40.604963 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:56:40.625747 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:56:40.625851 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:56:40.633757 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:56:40.634255 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:56:40.634396 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:56:40.778162 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:56:40.778280 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:00:40.776324 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243331s
	I1217 01:00:40.776348 1261197 kubeadm.go:319] 
	I1217 01:00:40.776405 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:00:40.776437 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:00:40.776540 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:00:40.776544 1261197 kubeadm.go:319] 
	I1217 01:00:40.776648 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:00:40.776679 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:00:40.776709 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:00:40.776712 1261197 kubeadm.go:319] 
	I1217 01:00:40.780629 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:00:40.781051 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:00:40.781158 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:00:40.781394 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:00:40.781398 1261197 kubeadm.go:319] 
	I1217 01:00:40.781466 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 01:00:40.781578 1261197 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243331s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
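The kubeadm output above already names the triage steps; collected into one snippet to run inside the node (e.g. via minikube ssh; a sketch):

    # Manual kubelet triage, straight from the kubeadm hints above.
    curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm polls for 4m0s
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50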
	
	I1217 01:00:40.781696 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:00:41.195061 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:00:41.209438 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:00:41.209493 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:00:41.218235 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:00:41.218244 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 01:00:41.218300 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:00:41.226394 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:00:41.226448 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:00:41.234445 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:00:41.242558 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:00:41.242613 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:00:41.250526 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.258573 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:00:41.258634 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.266278 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:00:41.274420 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:00:41.274476 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:00:41.281748 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:00:41.319491 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:00:41.319792 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:00:41.392691 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:00:41.392755 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:00:41.392789 1261197 kubeadm.go:319] OS: Linux
	I1217 01:00:41.392833 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:00:41.392880 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:00:41.392926 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:00:41.392972 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:00:41.393025 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:00:41.393072 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:00:41.393116 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:00:41.393163 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:00:41.393208 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:00:41.471655 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:00:41.471787 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:00:41.471905 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:00:41.482138 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:00:41.485739 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 01:00:41.485837 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:00:41.485905 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:00:41.485986 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:00:41.486050 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:00:41.486123 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:00:41.486180 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:00:41.486253 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:00:41.486318 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:00:41.486396 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:00:41.486478 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:00:41.486522 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:00:41.486584 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:00:41.603323 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:00:41.901106 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:00:42.054265 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:00:42.414109 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:00:42.682518 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:00:42.683180 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:00:42.685848 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:00:42.689217 1261197 out.go:252]   - Booting up control plane ...
	I1217 01:00:42.689317 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:00:42.689401 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:00:42.689468 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:00:42.713083 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:00:42.713185 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:00:42.721813 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:00:42.722110 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:00:42.722158 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:00:42.862014 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:00:42.862133 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:04:42.862018 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284909s
	I1217 01:04:42.862056 1261197 kubeadm.go:319] 
	I1217 01:04:42.862124 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:04:42.862167 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:04:42.862279 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:04:42.862283 1261197 kubeadm.go:319] 
	I1217 01:04:42.862390 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:04:42.862421 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:04:42.862451 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:04:42.862457 1261197 kubeadm.go:319] 
	I1217 01:04:42.866725 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:04:42.867116 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:04:42.867218 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:04:42.867438 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:04:42.867443 1261197 kubeadm.go:319] 
	I1217 01:04:42.867507 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:04:42.867593 1261197 kubeadm.go:403] duration metric: took 12m7.31765155s to StartCluster
	I1217 01:04:42.867623 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:04:42.867685 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:04:42.892141 1261197 cri.go:89] found id: ""
	I1217 01:04:42.892155 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.892162 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:04:42.892167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:04:42.892231 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:04:42.916795 1261197 cri.go:89] found id: ""
	I1217 01:04:42.916809 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.916817 1261197 logs.go:284] No container was found matching "etcd"
	I1217 01:04:42.916822 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:04:42.916879 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:04:42.945762 1261197 cri.go:89] found id: ""
	I1217 01:04:42.945776 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.945783 1261197 logs.go:284] No container was found matching "coredns"
	I1217 01:04:42.945794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:04:42.945850 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:04:42.970080 1261197 cri.go:89] found id: ""
	I1217 01:04:42.970094 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.970100 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:04:42.970105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:04:42.970161 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:04:42.994293 1261197 cri.go:89] found id: ""
	I1217 01:04:42.994307 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.994314 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:04:42.994319 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:04:42.994375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:04:43.019856 1261197 cri.go:89] found id: ""
	I1217 01:04:43.019871 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.019879 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:04:43.019884 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:04:43.019980 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:04:43.044643 1261197 cri.go:89] found id: ""
	I1217 01:04:43.044657 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.044664 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 01:04:43.044672 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 01:04:43.044682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:04:43.100644 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 01:04:43.100662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:04:43.115507 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:04:43.115524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:04:43.206420 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:04:43.206430 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 01:04:43.206440 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:04:43.268190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 01:04:43.268210 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 01:04:43.298717 1261197 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:04:43.298758 1261197 out.go:285] * 
	W1217 01:04:43.298817 1261197 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1217 01:04:43.298838 1261197 out.go:285] * 
	W1217 01:04:43.301057 1261197 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:04:43.305981 1261197 out.go:203] 
	W1217 01:04:43.308777 1261197 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1217 01:04:43.308838 1261197 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:04:43.308858 1261197 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:04:43.311954 1261197 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243334749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243430323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243566916Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243654818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243723127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243793503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243862870Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243933976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244147632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244278333Z" level=info msg="Connect containerd service"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244760505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.246010456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.254958867Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255148908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255207460Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255280454Z" level=info msg="Start recovering state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.295825702Z" level=info msg="Start event monitor"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296048071Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296114033Z" level=info msg="Start streaming server"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296179503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296236685Z" level=info msg="runtime interface starting up..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296301301Z" level=info msg="starting plugins..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296367492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296572451Z" level=info msg="containerd successfully booted in 0.086094s"
	Dec 17 00:52:34 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
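
	Aside from the CNI load failure ("no network config found in /etc/cni/net.d"), containerd starts cleanly here. A minimal sketch for confirming what containerd saw, assuming shell access into the node container (the container name is taken from the docker inspect output later in this report):

	    # List the CNI config directory containerd complained about; an empty
	    # directory is consistent with the "no network config found" error above.
	    docker exec functional-608344 ls -la /etc/cni/net.d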
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:46.755100   21220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:46.755683   21220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:46.757556   21220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:46.758170   21220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:46.759861   21220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:04:46 up  6:47,  0 user,  load average: 0.25, 0.20, 0.51
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 01:04:43 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:43 functional-608344 kubelet[20997]: E1217 01:04:43.935311   20997 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:43 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:44 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 01:04:44 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:44 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:44 functional-608344 kubelet[21093]: E1217 01:04:44.693449   21093 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:44 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:44 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:45 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 01:04:45 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:45 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:45 functional-608344 kubelet[21107]: E1217 01:04:45.449138   21107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:45 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:45 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:04:46 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 17 01:04:46 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:46 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:04:46 functional-608344 kubelet[21135]: E1217 01:04:46.189498   21135 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:04:46 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:04:46 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
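
The kubelet journal above shows restart counters 321 through 324, every attempt failing validation with "kubelet is configured to not run on a host using cgroup v1", which matches the kubeadm SystemVerification warning about cgroup v1 deprecation. A minimal sketch for confirming the host's cgroup hierarchy and applying the override the warning names; the YAML field name failCgroupV1 is inferred from the warning's 'FailCgroupV1' option, and appending it to /var/lib/kubelet/config.yaml is an assumption, not something this run did:

    # Confirm which cgroup hierarchy the node uses:
    #   "cgroup2fs" -> unified cgroup v2; "tmpfs" -> legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/

    # On a cgroup v1 host, per the warning above, kubelet v1.35+ only starts when
    # the KubeletConfiguration explicitly opts back in (field name assumed):
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet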
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (381.319018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.21s)
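
The failure output above includes minikube's own suggestion for this exit code. A sketch of that retry, built only from the suggestion and the profile name printed in this report; it is untried here, and since the kubelet error points at cgroup v1 validation rather than the cgroup driver, it may not be sufficient on its own:

    # Retry per the printed suggestion (untested against this failure):
    out/minikube-linux-arm64 start -p functional-608344 \
      --extra-config=kubelet.cgroup-driver=systemd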

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-608344 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-608344 apply -f testdata/invalidsvc.yaml: exit status 1 (57.195862ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-608344 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
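
Note that the apply never reached the intended validation error: it failed on "connection refused" to 192.168.49.2:8441, i.e. the apiserver is down, and the --validate=false hint in the stderr only disables client-side schema download. A quick sketch for probing the endpoint the error names before debugging the manifest (-k because the apiserver serves a self-signed certificate):

    # Probe the apiserver endpoint from the stderr above; while the control
    # plane is down this fails with the same "connection refused":
    curl -k https://192.168.49.2:8441/healthz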

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-608344 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-608344 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-608344 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-608344 --alsologtostderr -v=1] stderr:
I1217 01:06:52.932787 1278114 out.go:360] Setting OutFile to fd 1 ...
I1217 01:06:52.932965 1278114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:06:52.932977 1278114 out.go:374] Setting ErrFile to fd 2...
I1217 01:06:52.932982 1278114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:06:52.933264 1278114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:06:52.933527 1278114 mustload.go:66] Loading cluster: functional-608344
I1217 01:06:52.933989 1278114 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:06:52.934469 1278114 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:06:52.952641 1278114 host.go:66] Checking if "functional-608344" exists ...
I1217 01:06:52.953003 1278114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:06:53.011151 1278114 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.999991713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:06:53.011286 1278114 api_server.go:166] Checking apiserver status ...
I1217 01:06:53.011354 1278114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 01:06:53.011397 1278114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:06:53.029096 1278114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
W1217 01:06:53.127418 1278114 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 01:06:53.130652 1278114 out.go:179] * The control-plane node functional-608344 apiserver is not running: (state=Stopped)
I1217 01:06:53.133635 1278114 out.go:179]   To start a cluster, run: "minikube start -p functional-608344"
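
The dashboard command therefore never printed a URL: its preflight apiserver check (the pgrep run above) exits non-zero because no kube-apiserver process exists. The same probe can be reproduced by hand, assuming the node container is reachable with docker exec under the name shown in the inspect output below:

    # Reproduce minikube's apiserver liveness probe inside the node container;
    # expect a non-zero exit while the control plane is down:
    docker exec functional-608344 sudo pgrep -xnf 'kube-apiserver.*minikube.*'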
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
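
The inspect output above is where the harness recovers the container's published ports: each guest port (22, 2376, 5000, 8441, 32443) is bound to an ephemeral port on 127.0.0.1, and 8441/tcp, the apiserver port this profile uses, maps to 33946. Below is a minimal sketch of extracting that mapping programmatically, assuming the docker CLI is on PATH; the struct mirrors only the JSON fields it needs, and hostPortFor is an illustrative helper, not part of the test harness.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models just the fields of `docker inspect` output used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	// hostPortFor returns the host address Docker mapped to the given
	// container port (e.g. "8441/tcp" for the apiserver in the logs above).
	func hostPortFor(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		for _, e := range entries {
			for _, b := range e.NetworkSettings.Ports[port] {
				return b.HostIp + ":" + b.HostPort, nil
			}
		}
		return "", fmt.Errorf("no binding for %s", port)
	}

	func main() {
		addr, err := hostPortFor("functional-608344", "8441/tcp")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver published at", addr) // 127.0.0.1:33946 in this run
	}

The same binding is printed by docker port functional-608344 8441 without any JSON parsing.
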
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (314.826498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-608344 service hello-node --url --format={{.IP}}                                                                                         │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ service   │ functional-608344 service hello-node --url                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh -- ls -la /mount-9p                                                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh cat /mount-9p/test-1765933603668091719                                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh sudo umount -f /mount-9p                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4153205073/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh -- ls -la /mount-9p                                                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh sudo umount -f /mount-9p                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount1 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount2 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount3 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh findmnt -T /mount1                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh findmnt -T /mount2                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh findmnt -T /mount3                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ mount     │ -p functional-608344 --kill=true                                                                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-608344 --alsologtostderr -v=1                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:06:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:06:52.678226 1278041 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:06:52.678416 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678444 1278041 out.go:374] Setting ErrFile to fd 2...
	I1217 01:06:52.678471 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678859 1278041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:06:52.679342 1278041 out.go:368] Setting JSON to false
	I1217 01:06:52.680559 1278041 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24563,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:06:52.680658 1278041 start.go:143] virtualization:  
	I1217 01:06:52.683773 1278041 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:06:52.687734 1278041 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:06:52.687819 1278041 notify.go:221] Checking for updates...
	I1217 01:06:52.693713 1278041 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:06:52.696651 1278041 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:06:52.699691 1278041 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:06:52.702755 1278041 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:06:52.705750 1278041 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:06:52.709191 1278041 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:06:52.709996 1278041 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:06:52.733434 1278041 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:06:52.733546 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.795063 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.785060306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.795175 1278041 docker.go:319] overlay module found
	I1217 01:06:52.798295 1278041 out.go:179] * Using the docker driver based on existing profile
	I1217 01:06:52.801102 1278041 start.go:309] selected driver: docker
	I1217 01:06:52.801151 1278041 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.801250 1278041 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:06:52.801362 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.856592 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.847153487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.857047 1278041 cni.go:84] Creating CNI manager for ""
	I1217 01:06:52.857113 1278041 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:06:52.857152 1278041 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.860228 1278041 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243334749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243430323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243566916Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243654818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243723127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243793503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243862870Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243933976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244147632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244278333Z" level=info msg="Connect containerd service"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244760505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.246010456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.254958867Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255148908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255207460Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255280454Z" level=info msg="Start recovering state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.295825702Z" level=info msg="Start event monitor"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296048071Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296114033Z" level=info msg="Start streaming server"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296179503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296236685Z" level=info msg="runtime interface starting up..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296301301Z" level=info msg="starting plugins..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296367492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296572451Z" level=info msg="containerd successfully booted in 0.086094s"
	Dec 17 00:52:34 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:06:54.173068   23256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:54.174085   23256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:54.175744   23256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:54.176385   23256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:54.177610   23256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:06:54 up  6:49,  0 user,  load average: 0.51, 0.28, 0.50
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:06:50 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 17 01:06:51 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:51 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:51 functional-608344 kubelet[23104]: E1217 01:06:51.425161   23104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 17 01:06:52 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:52 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:52 functional-608344 kubelet[23126]: E1217 01:06:52.181560   23126 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 17 01:06:52 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:52 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:52 functional-608344 kubelet[23139]: E1217 01:06:52.926578   23139 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:52 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:53 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 17 01:06:53 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:53 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:53 functional-608344 kubelet[23169]: E1217 01:06:53.668902   23169 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:53 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:53 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
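
The kubelet excerpt above contains the root cause behind most of the failures in this report: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it (counters 491 through 494 in roughly two seconds) and the apiserver on 8441 never comes up. A minimal sketch of the underlying host check, assuming golang.org/x/sys/unix is available: on a unified-hierarchy (cgroup v2) host, /sys/fs/cgroup is a cgroup2 filesystem, while on this Ubuntu 20.04 kernel it is not.

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		// On a unified-hierarchy (cgroup v2) host, /sys/fs/cgroup is a cgroup2fs
		// mount; on cgroup v1 it is a tmpfs holding per-controller mounts.
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2: kubelet v1.35.0-beta.0 can start")
		} else {
			// Matches the failure in the logs: this kernel boots with
			// cgroup v1, which the beta kubelet no longer accepts.
			fmt.Println("cgroup v1: kubelet validation will fail")
		}
	}

The shell equivalent is stat -fc %T /sys/fs/cgroup, which prints cgroup2fs only on a cgroup v2 host.
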
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (309.190535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.70s)
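
This FAIL is a symptom of the same condition: with the apiserver down, every client in the post-mortem gets connection refused on port 8441. A hedged sketch of a plain TCP reachability probe against the published apiserver endpoint follows; 127.0.0.1:33946 is the 8441/tcp mapping taken from the inspect output above, hard-coded here purely for illustration.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:33946 is the host side of the container's 8441/tcp mapping
		// shown in the docker inspect output above (hard-coded for illustration).
		conn, err := net.DialTimeout("tcp", "127.0.0.1:33946", 2*time.Second)
		if err != nil {
			// With the kubelet crash-looping, the apiserver never binds, so
			// this is the same "connection refused" the describe-nodes step hit.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port accepts TCP connections")
	}
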

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 status: exit status 2 (304.948846ms)

                                                
                                                
-- stdout --
	functional-608344
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-608344 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (289.329337ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-608344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 status -o json: exit status 2 (335.389875ms)

                                                
                                                
-- stdout --
	{"Name":"functional-608344","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-608344 status -o json" : exit status 2
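
Of the three status invocations, the -o json form is the one meant for machine consumption; its fields map one-to-one onto a small struct. A sketch of decoding the exact line printed above (field names copied from the JSON keys; this is not the harness's own parsing code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeStatus mirrors the JSON emitted by `minikube status -o json`
	// in the output above.
	type minikubeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"functional-608344","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s minikubeStatus
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			fmt.Println(err)
			return
		}
		// The condition the test tripped on: host up, control plane down.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}
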
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (305.827921ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-608344 addons list -o json                                                                                                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ service │ functional-608344 service list                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ service │ functional-608344 service list -o json                                                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ service │ functional-608344 service --namespace=default --https --url hello-node                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ service │ functional-608344 service hello-node --url --format={{.IP}}                                                                                         │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ service │ functional-608344 service hello-node --url                                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh     │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount   │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh     │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh -- ls -la /mount-9p                                                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh cat /mount-9p/test-1765933603668091719                                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh     │ functional-608344 ssh sudo umount -f /mount-9p                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount   │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4153205073/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh     │ functional-608344 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh -- ls -la /mount-9p                                                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh sudo umount -f /mount-9p                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount   │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount1 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount   │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount2 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount   │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount3 --alsologtostderr -v=1                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh     │ functional-608344 ssh findmnt -T /mount1                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh findmnt -T /mount2                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh     │ functional-608344 ssh findmnt -T /mount3                                                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ mount   │ -p functional-608344 --kill=true                                                                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:52:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:52:31.527617 1261197 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:31.527758 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527763 1261197 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:31.527767 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527997 1261197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:52:31.528338 1261197 out.go:368] Setting JSON to false
	I1217 00:52:31.529124 1261197 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23702,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:52:31.529179 1261197 start.go:143] virtualization:  
	I1217 00:52:31.532534 1261197 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:52:31.537145 1261197 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:52:31.537272 1261197 notify.go:221] Checking for updates...
	I1217 00:52:31.542910 1261197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:31.545800 1261197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:52:31.548609 1261197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:52:31.551556 1261197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:52:31.554346 1261197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:31.557970 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:31.558066 1261197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:31.587498 1261197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:52:31.587608 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.650823 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.641966313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.650910 1261197 docker.go:319] overlay module found
	I1217 00:52:31.653844 1261197 out.go:179] * Using the docker driver based on existing profile
	I1217 00:52:31.656662 1261197 start.go:309] selected driver: docker
	I1217 00:52:31.656669 1261197 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.656773 1261197 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:31.656888 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.710052 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.70077893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.710641 1261197 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:52:31.710676 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:31.710788 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:31.710847 1261197 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.713993 1261197 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:52:31.716755 1261197 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:52:31.719575 1261197 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:52:31.722367 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:31.722402 1261197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:52:31.722423 1261197 cache.go:65] Caching tarball of preloaded images
	I1217 00:52:31.722451 1261197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:52:31.722505 1261197 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:52:31.722513 1261197 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:52:31.722616 1261197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:52:31.740561 1261197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:52:31.740571 1261197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:52:31.740584 1261197 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:52:31.740613 1261197 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:52:31.740665 1261197 start.go:364] duration metric: took 37.006µs to acquireMachinesLock for "functional-608344"
	I1217 00:52:31.740682 1261197 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:52:31.740687 1261197 fix.go:54] fixHost starting: 
	I1217 00:52:31.740957 1261197 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:52:31.756910 1261197 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:52:31.756929 1261197 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:52:31.760018 1261197 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:52:31.760042 1261197 machine.go:94] provisionDockerMachine start ...
	I1217 00:52:31.760119 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.776640 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.776960 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.776966 1261197 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:52:31.905356 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:31.905370 1261197 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:52:31.905445 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.925834 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.926164 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.926177 1261197 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:52:32.067014 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:32.067088 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.084172 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:32.084485 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:32.084499 1261197 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
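
The guarded script above only rewrites the 127.0.1.1 entry when the hostname is missing, so re-runs are harmless. A quick way to confirm the result from the host, assuming the docker driver (where the node is a container named after the profile):

    docker exec functional-608344 grep -n 'functional-608344' /etc/hosts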
	I1217 00:52:32.214216 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:52:32.214232 1261197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:52:32.214253 1261197 ubuntu.go:190] setting up certificates
	I1217 00:52:32.214268 1261197 provision.go:84] configureAuth start
	I1217 00:52:32.214325 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:32.232515 1261197 provision.go:143] copyHostCerts
	I1217 00:52:32.232580 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:52:32.232588 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:52:32.232671 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:52:32.232772 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:52:32.232776 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:52:32.232801 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:52:32.232878 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:52:32.232885 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:52:32.232913 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:52:32.232967 1261197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
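
minikube signs that server certificate with its own CA; as a rough standalone illustration of the same SAN set only (self-signed, not the CA-signed flow used here, and with hypothetical file names):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj '/O=jenkins.functional-608344' \
      -addext 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-608344,DNS:localhost,DNS:minikube'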
	I1217 00:52:32.616759 1261197 provision.go:177] copyRemoteCerts
	I1217 00:52:32.616824 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:52:32.616864 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.638193 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
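
The endpoint that ssh client wraps can also be exercised directly; a sketch using the port and key path recorded on the line above:

    ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa \
      -p 33943 docker@127.0.0.1 hostname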
	I1217 00:52:32.737540 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:52:32.755258 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:52:32.772709 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:52:32.791423 1261197 provision.go:87] duration metric: took 577.141949ms to configureAuth
	I1217 00:52:32.791441 1261197 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:52:32.791635 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:32.791640 1261197 machine.go:97] duration metric: took 1.031594088s to provisionDockerMachine
	I1217 00:52:32.791646 1261197 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:52:32.791656 1261197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:52:32.791701 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:52:32.791750 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.809559 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.905557 1261197 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:52:32.908787 1261197 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:52:32.908827 1261197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:52:32.908837 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:52:32.908891 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:52:32.908975 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:52:32.909048 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:52:32.909089 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:52:32.916399 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:32.933317 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:52:32.950047 1261197 start.go:296] duration metric: took 158.386583ms for postStartSetup
	I1217 00:52:32.950118 1261197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:52:32.950170 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.968857 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.062653 1261197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:52:33.067278 1261197 fix.go:56] duration metric: took 1.32658398s for fixHost
	I1217 00:52:33.067294 1261197 start.go:83] releasing machines lock for "functional-608344", held for 1.326621929s
	I1217 00:52:33.067361 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:33.084000 1261197 ssh_runner.go:195] Run: cat /version.json
	I1217 00:52:33.084040 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.084288 1261197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:52:33.084348 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.108566 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.111371 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.289488 1261197 ssh_runner.go:195] Run: systemctl --version
	I1217 00:52:33.296034 1261197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:52:33.300233 1261197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:52:33.300292 1261197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:52:33.307943 1261197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:52:33.307957 1261197 start.go:496] detecting cgroup driver to use...
	I1217 00:52:33.307988 1261197 detect.go:187] detected "cgroupfs" cgroup driver on host os
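
A rough way to reproduce that detection by hand (not minikube's exact logic; both probes are only heuristics):

    stat -fc %T /sys/fs/cgroup   # 'cgroup2fs' => unified cgroup v2, 'tmpfs' => legacy v1 hierarchy
    ps -p 1 -o comm=             # 'systemd' as PID 1 is what normally makes the systemd driver viable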
	I1217 00:52:33.308034 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:52:33.325973 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:52:33.341243 1261197 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:52:33.341313 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:52:33.357700 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:52:33.373469 1261197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:52:33.498827 1261197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:52:33.614529 1261197 docker.go:234] disabling docker service ...
	I1217 00:52:33.614598 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:52:33.629592 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:52:33.642692 1261197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:52:33.771770 1261197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:52:33.894226 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:52:33.907337 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:52:33.922634 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:52:33.932171 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:52:33.941438 1261197 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:52:33.941508 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:52:33.950063 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.958782 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:52:33.967078 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.975466 1261197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:52:33.983339 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:52:33.991895 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:52:34.000351 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
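
After that chain of sed edits, the effective values can be spot-checked before the daemon-reload and restart that follow; a minimal sketch:

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml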
	I1217 00:52:34.010891 1261197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:52:34.018879 1261197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:52:34.026594 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.150165 1261197 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:52:34.299897 1261197 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:52:34.299958 1261197 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:52:34.303895 1261197 start.go:564] Will wait 60s for crictl version
	I1217 00:52:34.303948 1261197 ssh_runner.go:195] Run: which crictl
	I1217 00:52:34.307381 1261197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:52:34.334814 1261197 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:52:34.334888 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.355644 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.381331 1261197 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:52:34.384165 1261197 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:52:34.399831 1261197 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:52:34.407243 1261197 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:52:34.410160 1261197 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:52:34.410312 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:34.410394 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.434882 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.434894 1261197 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:52:34.434955 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.460154 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.460166 1261197 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:52:34.460173 1261197 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:52:34.460276 1261197 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:52:34.460340 1261197 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:52:34.485418 1261197 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
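
That override originates from the profile's ExtraOptions; on the command line the same setting is requested with minikube's --extra-config flag, matching the value recorded in this run:

    out/minikube-linux-arm64 start -p functional-608344 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision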
	I1217 00:52:34.485440 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:34.485447 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:34.485462 1261197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:52:34.485483 1261197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:52:34.485591 1261197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
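
A config like the one above can be sanity-checked against the same kubeadm binary before use; a sketch with the pinned binary path from this run (the 'kubeadm config validate' subcommand exists in recent kubeadm releases, assumed available for this beta build):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml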
	
	I1217 00:52:34.485688 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:52:34.493475 1261197 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:52:34.493536 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:52:34.501738 1261197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:52:34.515117 1261197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:52:34.528350 1261197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1217 00:52:34.541325 1261197 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:52:34.545027 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.663222 1261197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:52:34.871198 1261197 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:52:34.871209 1261197 certs.go:195] generating shared ca certs ...
	I1217 00:52:34.871223 1261197 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:52:34.871350 1261197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:52:34.871405 1261197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:52:34.871411 1261197 certs.go:257] generating profile certs ...
	I1217 00:52:34.871503 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:52:34.871558 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:52:34.871595 1261197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:52:34.871710 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:52:34.871738 1261197 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:52:34.871746 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:52:34.871770 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:52:34.871791 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:52:34.871819 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:52:34.871867 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:34.872533 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:52:34.890674 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:52:34.908252 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:52:34.925752 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:52:34.942982 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:52:34.961072 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:52:34.978793 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:52:34.995794 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:52:35.016106 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:52:35.035474 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:52:35.054248 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:52:35.072025 1261197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:52:35.085836 1261197 ssh_runner.go:195] Run: openssl version
	I1217 00:52:35.092498 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.100138 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:52:35.107992 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111748 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111805 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.153206 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:52:35.161118 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.168560 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:52:35.176276 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180431 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180496 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.224274 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:52:35.231870 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.239209 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:52:35.246988 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250581 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250708 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.291833 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
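
The /etc/ssl/certs/<hash>.0 names being tested above come from each certificate's subject hash, which is why every block first runs openssl x509 -hash; the pattern in one sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo linked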
	I1217 00:52:35.299197 1261197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:52:35.302994 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:52:35.343876 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:52:35.384935 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:52:35.425945 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:52:35.468160 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:52:35.509040 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
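
Each -checkend 86400 probe above asks whether the certificate stays valid for at least the next 86400 seconds (24 h); the answer is carried in the exit code:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for >= 24h' || echo 'expires within 24h'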
	I1217 00:52:35.549950 1261197 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:35.550030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:52:35.550101 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.575493 1261197 cri.go:89] found id: ""
	I1217 00:52:35.575551 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:52:35.583488 1261197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:52:35.583498 1261197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:52:35.583562 1261197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:52:35.590939 1261197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.591435 1261197 kubeconfig.go:125] found "functional-608344" server: "https://192.168.49.2:8441"
	I1217 00:52:35.592674 1261197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:52:35.600478 1261197 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:38:00.276726971 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:52:34.535031442 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1217 00:52:35.600490 1261197 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:52:35.600503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 00:52:35.600556 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.635394 1261197 cri.go:89] found id: ""
	I1217 00:52:35.635452 1261197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:52:35.655954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:52:35.664843 1261197 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 00:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 00:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 17 00:42 /etc/kubernetes/scheduler.conf
	
	I1217 00:52:35.664920 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:52:35.673926 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:52:35.681783 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.681837 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:52:35.689482 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.698370 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.698438 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.705988 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:52:35.714414 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.714484 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:52:35.722072 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:52:35.729848 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:35.776855 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.711300 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.926722 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.999232 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
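
The restart path replays individual kubeadm init phases rather than running a full init; the five invocations above condense to the following sketch (paths as in this run):

    KB=/var/lib/minikube/binaries/v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      # word-splitting of $phase into subcommand words is intentional here
      sudo env PATH="$KB:$PATH" kubeadm init phase $phase --config "$CFG"
    done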
	I1217 00:52:37.047947 1261197 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:52:37.048019 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (this pgrep probe repeats every ~500ms with no match; 118 identical entries from 00:52:37.5 through 00:53:36.0 elided) ...
	I1217 00:53:36.548852 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
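	This is minikube's apiserver wait loop: a pgrep probe every ~500ms, where a non-zero exit means the kube-apiserver process has not appeared yet. After about a minute without a match it starts interleaving diagnostic sweeps with further probes, as the following blocks show. A minimal sketch of such a poll-until-deadline loop (helper names are assumed; the real loop lives in minikube's api_server.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func waitForAPIServer(timeout time.Duration) bool {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            // pgrep exits 0 only when a matching process exists.
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                return true
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return false
	    }

	    func main() {
	        if !waitForAPIServer(60 * time.Second) {
	            fmt.Println("apiserver process never appeared; gathering diagnostics")
	        }
	    }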
	I1217 00:53:37.048332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:37.048420 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:37.076924 1261197 cri.go:89] found id: ""
	I1217 00:53:37.076939 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.076947 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:37.076953 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:37.077010 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:37.103936 1261197 cri.go:89] found id: ""
	I1217 00:53:37.103950 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.103957 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:37.103962 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:37.104019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:37.134578 1261197 cri.go:89] found id: ""
	I1217 00:53:37.134592 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.134599 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:37.134605 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:37.134667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:37.162973 1261197 cri.go:89] found id: ""
	I1217 00:53:37.162986 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.162994 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:37.162999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:37.163063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:37.193768 1261197 cri.go:89] found id: ""
	I1217 00:53:37.193782 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.193789 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:37.193794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:37.193864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:37.217378 1261197 cri.go:89] found id: ""
	I1217 00:53:37.217391 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.217398 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:37.217403 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:37.217464 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:37.245938 1261197 cri.go:89] found id: ""
	I1217 00:53:37.245952 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.245959 1261197 logs.go:284] No container was found matching "kindnet"
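	With no apiserver process, minikube takes a census of every expected control-plane container by name through crictl; an empty ID list for all seven (including the kindnet CNI pod) confirms the kubelet never launched any static pods. A hedged sketch of the same census (local execution assumed, rather than minikube's cri.go over ssh):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        for _, name := range []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	        } {
	            // --quiet prints one container ID per line; no output means no container.
	            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            ids := strings.Fields(string(out))
	            fmt.Printf("%-24s %d containers\n", name, len(ids))
	        }
	    }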
	I1217 00:53:37.245967 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:37.245977 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:37.303279 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:37.303297 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:37.317809 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:37.317826 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:37.378847 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
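	Every one of these kubectl failures is the same symptom: nothing is listening on port 8441, so each API discovery request is refused before TLS even starts. A quick Go probe of the port distinguishes this from, say, an auth or certificate failure (endpoint taken from this run):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver port closed:", err) // matches the "connection refused" above
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port open")
	    }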
	I1217 00:53:37.378858 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:37.378870 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:37.440776 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:37.440795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
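	The final gather step uses a shell fallback chain: command-substitute the crictl path if `which` finds one (else the bare name), and if that whole invocation fails, fall back to `docker ps -a`. The same fallback expressed in Go (local execution assumed; illustrative only):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func containerStatus() ([]byte, error) {
	        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
	            return out, nil
	        }
	        // crictl missing or failing: fall back to the docker CLI.
	        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	    }

	    func main() {
	        out, err := containerStatus()
	        fmt.Printf("err=%v\n%s", err, out)
	    }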
	I1217 00:53:39.970536 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:39.980652 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:39.980714 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:40.014928 1261197 cri.go:89] found id: ""
	I1217 00:53:40.014943 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.014950 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:40.014956 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:40.015027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:40.044249 1261197 cri.go:89] found id: ""
	I1217 00:53:40.044284 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.044292 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:40.044299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:40.044375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:40.071071 1261197 cri.go:89] found id: ""
	I1217 00:53:40.071086 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.071094 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:40.071100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:40.071166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:40.096922 1261197 cri.go:89] found id: ""
	I1217 00:53:40.096936 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.096944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:40.096950 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:40.097019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:40.126209 1261197 cri.go:89] found id: ""
	I1217 00:53:40.126223 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.126231 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:40.126237 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:40.126302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:40.166443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.166457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.166465 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:40.166470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:40.166532 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:40.194443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.194457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.194465 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:40.194472 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:40.194483 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:40.249960 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:40.249980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:40.264714 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:40.264730 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:40.334158 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:40.334168 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:40.334179 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:40.396176 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:40.396196 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:42.927525 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:42.939255 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:42.939317 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:42.967766 1261197 cri.go:89] found id: ""
	I1217 00:53:42.967780 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.967788 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:42.967793 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:42.967852 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:42.992216 1261197 cri.go:89] found id: ""
	I1217 00:53:42.992230 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.992238 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:42.992244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:42.992301 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:43.018174 1261197 cri.go:89] found id: ""
	I1217 00:53:43.018188 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.018196 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:43.018201 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:43.018260 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:43.043673 1261197 cri.go:89] found id: ""
	I1217 00:53:43.043687 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.043695 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:43.043701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:43.043763 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:43.067990 1261197 cri.go:89] found id: ""
	I1217 00:53:43.068005 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.068012 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:43.068017 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:43.068079 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:43.093908 1261197 cri.go:89] found id: ""
	I1217 00:53:43.093923 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.093930 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:43.093936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:43.093995 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:43.120199 1261197 cri.go:89] found id: ""
	I1217 00:53:43.120213 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.120220 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:43.120228 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:43.120238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:43.181971 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:43.181989 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:43.197524 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:43.197541 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:43.261336 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:43.261356 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:43.261366 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:43.322519 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:43.322538 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:45.852691 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:45.863769 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:45.863831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:45.888335 1261197 cri.go:89] found id: ""
	I1217 00:53:45.888350 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.888357 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:45.888363 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:45.888422 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:45.918194 1261197 cri.go:89] found id: ""
	I1217 00:53:45.918209 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.918216 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:45.918222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:45.918285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:45.943809 1261197 cri.go:89] found id: ""
	I1217 00:53:45.943824 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.943831 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:45.943836 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:45.943893 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:45.969167 1261197 cri.go:89] found id: ""
	I1217 00:53:45.969182 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.969189 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:45.969195 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:45.969261 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:45.995411 1261197 cri.go:89] found id: ""
	I1217 00:53:45.995425 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.995432 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:45.995437 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:45.995495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:46.025138 1261197 cri.go:89] found id: ""
	I1217 00:53:46.025153 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.025161 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:46.025167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:46.025230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:46.052563 1261197 cri.go:89] found id: ""
	I1217 00:53:46.052578 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.052585 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:46.052594 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:46.052604 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:46.110268 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:46.110286 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:46.128213 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:46.128230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:46.211985 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:46.212008 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:46.212018 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:46.274022 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:46.274041 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:48.809808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:48.820115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:48.820172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:48.846046 1261197 cri.go:89] found id: ""
	I1217 00:53:48.846062 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.846069 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:48.846075 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:48.846145 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:48.871706 1261197 cri.go:89] found id: ""
	I1217 00:53:48.871721 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.871728 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:48.871734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:48.871794 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:48.896325 1261197 cri.go:89] found id: ""
	I1217 00:53:48.896341 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.896348 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:48.896353 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:48.896413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:48.922321 1261197 cri.go:89] found id: ""
	I1217 00:53:48.922335 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.922342 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:48.922348 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:48.922406 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:48.951311 1261197 cri.go:89] found id: ""
	I1217 00:53:48.951325 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.951332 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:48.951337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:48.951395 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:48.976196 1261197 cri.go:89] found id: ""
	I1217 00:53:48.976211 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.976218 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:48.976224 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:48.976285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:49.005156 1261197 cri.go:89] found id: ""
	I1217 00:53:49.005173 1261197 logs.go:282] 0 containers: []
	W1217 00:53:49.005181 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:49.005190 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:49.005202 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:49.067318 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:49.067385 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:49.083407 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:49.083424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:49.159947 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:49.159958 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:49.159970 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:49.230934 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:49.230956 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:51.761379 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:51.771759 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:51.771821 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:51.796369 1261197 cri.go:89] found id: ""
	I1217 00:53:51.796384 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.796391 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:51.796396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:51.796454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:51.822318 1261197 cri.go:89] found id: ""
	I1217 00:53:51.822333 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.822340 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:51.822345 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:51.822409 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:51.847395 1261197 cri.go:89] found id: ""
	I1217 00:53:51.847409 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.847416 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:51.847421 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:51.847479 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:51.872529 1261197 cri.go:89] found id: ""
	I1217 00:53:51.872544 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.872552 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:51.872557 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:51.872619 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:51.900871 1261197 cri.go:89] found id: ""
	I1217 00:53:51.900885 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.900893 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:51.900898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:51.900967 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:51.928534 1261197 cri.go:89] found id: ""
	I1217 00:53:51.928548 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.928555 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:51.928560 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:51.928621 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:51.957597 1261197 cri.go:89] found id: ""
	I1217 00:53:51.957611 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.957619 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:51.957627 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:51.957636 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:52.016924 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:52.016945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:52.033440 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:52.033458 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:52.106352 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:52.106373 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:52.106384 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:52.173915 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:52.173934 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:54.703159 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:54.713797 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:54.713862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:54.739273 1261197 cri.go:89] found id: ""
	I1217 00:53:54.739287 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.739294 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:54.739299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:54.739355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:54.770340 1261197 cri.go:89] found id: ""
	I1217 00:53:54.770355 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.770362 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:54.770367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:54.770430 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:54.795583 1261197 cri.go:89] found id: ""
	I1217 00:53:54.795597 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.795604 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:54.795611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:54.795670 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:54.823673 1261197 cri.go:89] found id: ""
	I1217 00:53:54.823688 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.823696 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:54.823701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:54.823760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:54.849899 1261197 cri.go:89] found id: ""
	I1217 00:53:54.849913 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.849921 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:54.849927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:54.849986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:54.874746 1261197 cri.go:89] found id: ""
	I1217 00:53:54.874761 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.874767 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:54.874773 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:54.874831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:54.898944 1261197 cri.go:89] found id: ""
	I1217 00:53:54.898961 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.898968 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:54.898975 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:54.898986 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:54.913535 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:54.913552 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:54.975130 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:54.975140 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:54.975150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:55.037117 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:55.037139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:55.067838 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:55.067855 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
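	[Annotation] The block above is one complete diagnostic pass: minikube probes for a kube-apiserver process, lists each expected control-plane container via crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The timestamps that follow show one pass roughly every three seconds. A minimal Go sketch of that polling loop — illustrative only, not minikube's actual wait code; the helper name and cadence are read off the log, not its source:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// The control-plane components the log probes for, in the same order.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	// listContainers mirrors `sudo crictl ps -a --quiet --name=<c>` from the
	// log; crictl prints one container ID per line, so empty output means no match.
	func listContainers(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for {
			missing := 0
			for _, c := range components {
				if len(listContainers(c)) == 0 {
					fmt.Printf("no container found matching %q\n", c)
					missing++
				}
			}
			if missing == 0 {
				fmt.Println("all control-plane containers present")
				return
			}
			// The log timestamps show one full pass roughly every three seconds.
			time.Sleep(3 * time.Second)
		}
	}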
	I1217 00:53:57.627174 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:57.637082 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:57.637153 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:57.661527 1261197 cri.go:89] found id: ""
	I1217 00:53:57.661541 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.661548 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:57.661553 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:57.661611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:57.685175 1261197 cri.go:89] found id: ""
	I1217 00:53:57.685189 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.685200 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:57.685205 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:57.685263 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:57.711702 1261197 cri.go:89] found id: ""
	I1217 00:53:57.711717 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.711724 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:57.711729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:57.711868 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:57.740036 1261197 cri.go:89] found id: ""
	I1217 00:53:57.740050 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.740058 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:57.740063 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:57.740122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:57.768675 1261197 cri.go:89] found id: ""
	I1217 00:53:57.768697 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.768704 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:57.768710 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:57.768775 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:57.792870 1261197 cri.go:89] found id: ""
	I1217 00:53:57.792883 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.792890 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:57.792895 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:57.792965 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:57.817001 1261197 cri.go:89] found id: ""
	I1217 00:53:57.817015 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.817022 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:57.817031 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:57.817053 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.871861 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:57.871881 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:57.886738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:57.886755 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:57.949301 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:57.949319 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:57.949329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:58.010230 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:58.010249 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.540430 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:00.550751 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:00.550814 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:00.576488 1261197 cri.go:89] found id: ""
	I1217 00:54:00.576501 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.576510 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:00.576515 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:00.576573 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:00.601369 1261197 cri.go:89] found id: ""
	I1217 00:54:00.601383 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.601396 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:00.601401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:00.601459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:00.625632 1261197 cri.go:89] found id: ""
	I1217 00:54:00.625667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.625675 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:00.625680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:00.625738 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:00.651689 1261197 cri.go:89] found id: ""
	I1217 00:54:00.651703 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.651710 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:00.651715 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:00.651777 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:00.679744 1261197 cri.go:89] found id: ""
	I1217 00:54:00.679757 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.679765 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:00.679770 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:00.679828 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:00.709559 1261197 cri.go:89] found id: ""
	I1217 00:54:00.709573 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.709580 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:00.709585 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:00.709662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:00.734417 1261197 cri.go:89] found id: ""
	I1217 00:54:00.734432 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.734439 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:00.734447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:00.734457 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.797638 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.797675 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:00.797685 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:00.859579 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.859598 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.885766 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:00.885783 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:00.946324 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:00.946344 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.461934 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:03.472673 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:03.472733 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:03.496966 1261197 cri.go:89] found id: ""
	I1217 00:54:03.496980 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.496987 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:03.496992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:03.497048 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:03.522192 1261197 cri.go:89] found id: ""
	I1217 00:54:03.522207 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.522214 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:03.522219 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:03.522280 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:03.547069 1261197 cri.go:89] found id: ""
	I1217 00:54:03.547083 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.547090 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:03.547095 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:03.547175 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:03.572136 1261197 cri.go:89] found id: ""
	I1217 00:54:03.572149 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.572156 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:03.572162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:03.572234 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:03.600755 1261197 cri.go:89] found id: ""
	I1217 00:54:03.600770 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.600782 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:03.600788 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:03.600859 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:03.629818 1261197 cri.go:89] found id: ""
	I1217 00:54:03.629836 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.629843 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:03.629849 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:03.629905 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:03.656769 1261197 cri.go:89] found id: ""
	I1217 00:54:03.656783 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.656790 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:03.656797 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:03.656807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:03.712292 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:03.712313 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.727502 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:03.727518 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:03.791668 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:03.782970   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.783616   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785323   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785958   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.787552   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:03.782970   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.783616   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785323   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785958   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.787552   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:03.791678 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:03.791688 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:03.854180 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:03.854200 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
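	[Annotation] The "container status" step uses a two-level shell fallback: `which crictl || echo crictl` keeps the command word non-empty even when crictl is missing from PATH, and the trailing `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails. A hedged Go equivalent of that fallback chain (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the fallback from the log:
	// try crictl first, then fall back to docker if crictl fails.
	func containerStatus() (string, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker could list containers:", err)
			return
		}
		fmt.Print(out)
	}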
	I1217 00:54:06.381966 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:06.393097 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:06.393156 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:06.429087 1261197 cri.go:89] found id: ""
	I1217 00:54:06.429101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.429108 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:06.429113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:06.429189 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:06.454075 1261197 cri.go:89] found id: ""
	I1217 00:54:06.454091 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.454101 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:06.454106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:06.454179 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:06.478067 1261197 cri.go:89] found id: ""
	I1217 00:54:06.478081 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.478088 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:06.478093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:06.478149 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:06.503508 1261197 cri.go:89] found id: ""
	I1217 00:54:06.503522 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.503529 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:06.503534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:06.503592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:06.532125 1261197 cri.go:89] found id: ""
	I1217 00:54:06.532139 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.532146 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:06.532151 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:06.532218 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:06.557383 1261197 cri.go:89] found id: ""
	I1217 00:54:06.557397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.557404 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:06.557409 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:06.557482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:06.583086 1261197 cri.go:89] found id: ""
	I1217 00:54:06.583101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.583109 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:06.583117 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:06.583128 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:06.638133 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:06.638153 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:06.652420 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:06.652439 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:06.715679 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:06.706907   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.707622   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709271   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709877   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.711565   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:06.706907   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.707622   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709271   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709877   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.711565   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:06.715692 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:06.715703 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:06.783529 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:06.783557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.314587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:09.324947 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:09.325009 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:09.349922 1261197 cri.go:89] found id: ""
	I1217 00:54:09.349945 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.349952 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:09.349957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:09.350025 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:09.381538 1261197 cri.go:89] found id: ""
	I1217 00:54:09.381552 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.381560 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:09.381565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:09.381627 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:09.412584 1261197 cri.go:89] found id: ""
	I1217 00:54:09.412606 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.412613 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:09.412621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:09.412696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:09.446518 1261197 cri.go:89] found id: ""
	I1217 00:54:09.446533 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.446541 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:09.446547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:09.446620 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:09.477943 1261197 cri.go:89] found id: ""
	I1217 00:54:09.477956 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.477963 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:09.477968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:09.478027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:09.503386 1261197 cri.go:89] found id: ""
	I1217 00:54:09.503400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.503407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:09.503413 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:09.503476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:09.528266 1261197 cri.go:89] found id: ""
	I1217 00:54:09.528292 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.528300 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:09.528308 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:09.528318 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:09.590766 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:09.590786 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.618540 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:09.618556 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:09.675017 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:09.675037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:09.689541 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:09.689557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:09.753013 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:09.744768   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.745442   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747017   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747521   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.749196   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:09.744768   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.745442   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747017   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747521   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.749196   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
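	[Annotation] Every describe-nodes attempt in this section fails identically: kubectl cannot reach the apiserver at localhost:8441 because nothing is listening on the port ("connect: connection refused"). A plain TCP dial confirms that independently of kubectl; this snippet is a diagnostic sketch, not part of the test harness:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl is failing to reach in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Matches the repeated "connect: connection refused" errors.
			fmt.Println("apiserver not listening:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}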
	I1217 00:54:12.253253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:12.263867 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:12.263926 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:12.289871 1261197 cri.go:89] found id: ""
	I1217 00:54:12.289888 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.289904 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:12.289910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:12.289975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:12.316441 1261197 cri.go:89] found id: ""
	I1217 00:54:12.316455 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.316462 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:12.316467 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:12.316527 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:12.340348 1261197 cri.go:89] found id: ""
	I1217 00:54:12.340362 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.340370 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:12.340375 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:12.340432 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:12.364082 1261197 cri.go:89] found id: ""
	I1217 00:54:12.364097 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.364104 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:12.364109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:12.364167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:12.390849 1261197 cri.go:89] found id: ""
	I1217 00:54:12.390863 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.390870 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:12.390875 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:12.390933 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:12.420430 1261197 cri.go:89] found id: ""
	I1217 00:54:12.420444 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.420451 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:12.420456 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:12.420518 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:12.448205 1261197 cri.go:89] found id: ""
	I1217 00:54:12.448221 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.448228 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:12.448236 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:12.448247 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:12.504931 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:12.504952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:12.519968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:12.519985 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:12.584010 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:12.584021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:12.584032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:12.647102 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:12.647123 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:15.176013 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:15.186921 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:15.186985 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:15.215197 1261197 cri.go:89] found id: ""
	I1217 00:54:15.215211 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.215218 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:15.215226 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:15.215284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:15.240116 1261197 cri.go:89] found id: ""
	I1217 00:54:15.240130 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.240137 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:15.240142 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:15.240201 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:15.267788 1261197 cri.go:89] found id: ""
	I1217 00:54:15.267802 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.267809 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:15.267814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:15.267871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:15.291699 1261197 cri.go:89] found id: ""
	I1217 00:54:15.291713 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.291720 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:15.291725 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:15.291782 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:15.315522 1261197 cri.go:89] found id: ""
	I1217 00:54:15.315536 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.315542 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:15.315548 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:15.315609 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:15.340325 1261197 cri.go:89] found id: ""
	I1217 00:54:15.340339 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.340346 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:15.340361 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:15.340423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:15.369889 1261197 cri.go:89] found id: ""
	I1217 00:54:15.369917 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.369924 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:15.369932 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:15.369942 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:15.428658 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:15.428679 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:15.444080 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:15.444099 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:15.512831 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:15.504258   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.504866   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506417   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506903   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.508413   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:15.504258   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.504866   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506417   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506903   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.508413   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:15.512843 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:15.512861 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:15.578043 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:15.578063 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:18.110567 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:18.120744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:18.120802 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:18.145094 1261197 cri.go:89] found id: ""
	I1217 00:54:18.145108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.145116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:18.145122 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:18.145185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:18.169518 1261197 cri.go:89] found id: ""
	I1217 00:54:18.169532 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.169542 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:18.169547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:18.169607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:18.194342 1261197 cri.go:89] found id: ""
	I1217 00:54:18.194356 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.194363 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:18.194369 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:18.194427 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:18.222931 1261197 cri.go:89] found id: ""
	I1217 00:54:18.222944 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.222952 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:18.222957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:18.223015 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:18.246707 1261197 cri.go:89] found id: ""
	I1217 00:54:18.246721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.246728 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:18.246734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:18.246792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:18.276152 1261197 cri.go:89] found id: ""
	I1217 00:54:18.276172 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.276180 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:18.276185 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:18.276250 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:18.300697 1261197 cri.go:89] found id: ""
	I1217 00:54:18.300711 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.300718 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:18.300725 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:18.300735 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:18.365628 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:18.357129   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.357756   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.359407   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.360050   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.361606   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:18.357129   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.357756   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.359407   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.360050   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.361606   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:18.365661 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:18.365671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:18.437541 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:18.437560 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:18.465122 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:18.465138 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:18.522977 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:18.522997 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
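
The cycle above is minikube's control-plane probe: it asks containerd (via crictl) for each expected component, finds no containers at all, and then falls back to host-level log gathering. A minimal shell sketch of one probe-and-gather pass, assuming crictl and journalctl are available on the node as the log itself shows, would be:

    # Probe containerd for every expected control-plane container; an empty
    # result, as in this run, means the component never started.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done

    # Host-level diagnostics, mirroring the commands in the log.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
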
	I1217 00:54:21.040317 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:21.050538 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:21.050601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:21.074720 1261197 cri.go:89] found id: ""
	I1217 00:54:21.074734 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.074741 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:21.074746 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:21.074808 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:21.099388 1261197 cri.go:89] found id: ""
	I1217 00:54:21.099402 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.099409 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:21.099414 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:21.099471 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:21.123589 1261197 cri.go:89] found id: ""
	I1217 00:54:21.123603 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.123616 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:21.123621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:21.123680 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:21.149246 1261197 cri.go:89] found id: ""
	I1217 00:54:21.149260 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.149267 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:21.149272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:21.149330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:21.175795 1261197 cri.go:89] found id: ""
	I1217 00:54:21.175809 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.175815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:21.175821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:21.175878 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:21.200104 1261197 cri.go:89] found id: ""
	I1217 00:54:21.200118 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.200125 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:21.200131 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:21.200191 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:21.228601 1261197 cri.go:89] found id: ""
	I1217 00:54:21.228615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.228622 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:21.228630 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:21.228642 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:21.285141 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:21.285160 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:21.300538 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:21.300554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:21.368570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:21.359690   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.360441   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362133   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362671   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.364235   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:21.359690   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.360441   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362133   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362671   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.364235   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:21.368590 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:21.368601 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:21.438594 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:21.438613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:23.967152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:23.977246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:23.977330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:24.002158 1261197 cri.go:89] found id: ""
	I1217 00:54:24.002175 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.002183 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:24.002189 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:24.002297 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:24.034702 1261197 cri.go:89] found id: ""
	I1217 00:54:24.034716 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.034723 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:24.034728 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:24.034788 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:24.059383 1261197 cri.go:89] found id: ""
	I1217 00:54:24.059397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.059404 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:24.059410 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:24.059466 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:24.088018 1261197 cri.go:89] found id: ""
	I1217 00:54:24.088032 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.088039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:24.088044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:24.088101 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:24.112493 1261197 cri.go:89] found id: ""
	I1217 00:54:24.112507 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.112514 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:24.112519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:24.112575 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:24.139798 1261197 cri.go:89] found id: ""
	I1217 00:54:24.139813 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.139819 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:24.139825 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:24.139886 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:24.164994 1261197 cri.go:89] found id: ""
	I1217 00:54:24.165008 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.165015 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:24.165022 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:24.165032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:24.224418 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:24.224438 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:24.239090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:24.239107 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:24.307181 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:24.298410   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.299241   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.300991   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.301309   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.302897   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:24.298410   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.299241   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.300991   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.301309   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.302897   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:24.307192 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:24.307203 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:24.369600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:24.369620 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:26.910110 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:26.920271 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:26.920343 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:26.947883 1261197 cri.go:89] found id: ""
	I1217 00:54:26.947897 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.947908 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:26.947913 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:26.947987 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:26.973290 1261197 cri.go:89] found id: ""
	I1217 00:54:26.973304 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.973312 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:26.973318 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:26.973377 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:26.997246 1261197 cri.go:89] found id: ""
	I1217 00:54:26.997261 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.997268 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:26.997272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:26.997328 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:27.023408 1261197 cri.go:89] found id: ""
	I1217 00:54:27.023422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.023429 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:27.023434 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:27.023494 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:27.051626 1261197 cri.go:89] found id: ""
	I1217 00:54:27.051640 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.051648 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:27.051653 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:27.051713 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:27.076431 1261197 cri.go:89] found id: ""
	I1217 00:54:27.076445 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.076452 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:27.076458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:27.076522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:27.101707 1261197 cri.go:89] found id: ""
	I1217 00:54:27.101721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.101728 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:27.101738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:27.101748 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:27.168764 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:27.159424   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.160157   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162060   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162697   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.164430   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:27.159424   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.160157   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162060   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162697   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.164430   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:27.168785 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:27.168797 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:27.233485 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:27.233505 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:27.269682 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:27.269699 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:27.328866 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:27.328887 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:29.845088 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:29.855320 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:29.855384 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:29.880133 1261197 cri.go:89] found id: ""
	I1217 00:54:29.880147 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.880156 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:29.880162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:29.880233 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:29.905055 1261197 cri.go:89] found id: ""
	I1217 00:54:29.905070 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.905078 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:29.905083 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:29.905141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:29.931379 1261197 cri.go:89] found id: ""
	I1217 00:54:29.931393 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.931400 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:29.931404 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:29.931465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:29.957268 1261197 cri.go:89] found id: ""
	I1217 00:54:29.957283 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.957290 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:29.957296 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:29.957360 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:29.982289 1261197 cri.go:89] found id: ""
	I1217 00:54:29.982303 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.982311 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:29.982316 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:29.982375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:30.024866 1261197 cri.go:89] found id: ""
	I1217 00:54:30.024883 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.024891 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:30.024898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:30.024973 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:30.071833 1261197 cri.go:89] found id: ""
	I1217 00:54:30.071852 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.071861 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:30.071877 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:30.071891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:30.147472 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:30.138339   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.139058   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.140827   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.141510   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.143194   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:30.138339   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.139058   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.140827   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.141510   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.143194   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:30.147484 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:30.147497 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:30.211213 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:30.211235 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:30.240355 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:30.240371 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:30.299743 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:30.299761 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
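
Between gather cycles the runner sleeps roughly two to three seconds and re-checks for the apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*". A rough shell equivalent of that wait loop, an illustrative sketch rather than minikube's actual Go implementation, looks like:

    # Illustrative re-creation of the wait loop visible in the timestamps;
    # minikube's real logic lives in Go (ssh_runner.go / logs.go). The 500 s
    # budget is an assumption chosen to match this test's apparent timeout.
    deadline=$((SECONDS + 500))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never started; dumping diagnostics" >&2
        break
      fi
      sleep 3
    done
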
	I1217 00:54:32.815023 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:32.824966 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:32.825040 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:32.849786 1261197 cri.go:89] found id: ""
	I1217 00:54:32.849799 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.849806 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:32.849812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:32.849875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:32.875478 1261197 cri.go:89] found id: ""
	I1217 00:54:32.875491 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.875498 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:32.875503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:32.875563 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:32.899514 1261197 cri.go:89] found id: ""
	I1217 00:54:32.899528 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.899534 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:32.899539 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:32.899601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:32.923962 1261197 cri.go:89] found id: ""
	I1217 00:54:32.923977 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.923984 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:32.923990 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:32.924067 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:32.948671 1261197 cri.go:89] found id: ""
	I1217 00:54:32.948685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.948692 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:32.948697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:32.948753 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:32.973420 1261197 cri.go:89] found id: ""
	I1217 00:54:32.973434 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.973440 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:32.973446 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:32.973505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:32.997981 1261197 cri.go:89] found id: ""
	I1217 00:54:32.997996 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.998003 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:32.998010 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:32.998020 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:33.055157 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:33.055177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:33.070286 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:33.070306 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:33.136931 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:33.127195   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.128490   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.129422   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.130970   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.131425   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:33.127195   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.128490   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.129422   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.130970   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.131425   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:33.136941 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:33.136952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:33.199432 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:33.199453 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:35.728077 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:35.738194 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:35.738256 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:35.763154 1261197 cri.go:89] found id: ""
	I1217 00:54:35.763169 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.763176 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:35.763182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:35.763238 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:35.787668 1261197 cri.go:89] found id: ""
	I1217 00:54:35.787682 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.787689 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:35.787695 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:35.787751 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:35.811854 1261197 cri.go:89] found id: ""
	I1217 00:54:35.811868 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.811884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:35.811890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:35.811961 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:35.836579 1261197 cri.go:89] found id: ""
	I1217 00:54:35.836594 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.836601 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:35.836607 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:35.836684 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:35.861837 1261197 cri.go:89] found id: ""
	I1217 00:54:35.861851 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.861858 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:35.861863 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:35.861921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:35.886709 1261197 cri.go:89] found id: ""
	I1217 00:54:35.886723 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.886730 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:35.886736 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:35.886792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:35.910235 1261197 cri.go:89] found id: ""
	I1217 00:54:35.910248 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.910255 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:35.910275 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:35.910285 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:35.966535 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:35.966553 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:35.981143 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:35.981169 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:36.045220 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:36.037007   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.037415   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039070   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039887   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:36.037007   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.037415   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039070   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039887   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:36.045231 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:36.045241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:36.106277 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:36.106296 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:38.637781 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:38.649664 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:38.649725 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:38.691238 1261197 cri.go:89] found id: ""
	I1217 00:54:38.691252 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.691259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:38.691264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:38.691322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:38.716035 1261197 cri.go:89] found id: ""
	I1217 00:54:38.716049 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.716055 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:38.716066 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:38.716125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:38.740603 1261197 cri.go:89] found id: ""
	I1217 00:54:38.740616 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.740624 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:38.740629 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:38.740687 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:38.766239 1261197 cri.go:89] found id: ""
	I1217 00:54:38.766253 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.766260 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:38.766266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:38.766324 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:38.791492 1261197 cri.go:89] found id: ""
	I1217 00:54:38.791506 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.791513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:38.791519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:38.791579 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:38.816435 1261197 cri.go:89] found id: ""
	I1217 00:54:38.816449 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.816456 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:38.816461 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:38.816520 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:38.841085 1261197 cri.go:89] found id: ""
	I1217 00:54:38.841099 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.841107 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:38.841114 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:38.841124 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:38.896837 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:38.896856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:38.911640 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:38.911658 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:38.976373 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:38.976383 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:38.976393 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:39.037751 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:39.037771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
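
Every "kubectl describe nodes" attempt here fails with "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings. A quick manual check that separates this case from a TLS or authorization failure (a sketch; curl and ss are assumed to be present on the node):

    # "connection refused" means no listener on 8441; a certificate or
    # authorization error would instead mean the apiserver is up but unhealthy.
    curl -k https://localhost:8441/healthz || true
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
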
	I1217 00:54:41.567032 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:41.578116 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:41.578182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:41.603748 1261197 cri.go:89] found id: ""
	I1217 00:54:41.603762 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.603770 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:41.603775 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:41.603833 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:41.634998 1261197 cri.go:89] found id: ""
	I1217 00:54:41.635012 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.635019 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:41.635024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:41.635080 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:41.678283 1261197 cri.go:89] found id: ""
	I1217 00:54:41.678297 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.678307 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:41.678312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:41.678375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:41.704945 1261197 cri.go:89] found id: ""
	I1217 00:54:41.704960 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.704967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:41.704977 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:41.705035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:41.729909 1261197 cri.go:89] found id: ""
	I1217 00:54:41.729923 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.729930 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:41.729936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:41.730019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:41.754648 1261197 cri.go:89] found id: ""
	I1217 00:54:41.754662 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.754669 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:41.754675 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:41.754734 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:41.779433 1261197 cri.go:89] found id: ""
	I1217 00:54:41.779448 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.779455 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:41.779463 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:41.779474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:41.793989 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:41.794006 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:41.858584 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:41.858594 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:41.858605 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:41.923655 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:41.923682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.950619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:41.950638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
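Every "describe nodes" attempt in this stretch fails the same way: kubectl's discovery requests to https://localhost:8441 (the profile's apiserver port) are refused because no kube-apiserver container exists yet, so nothing is listening on that port. A minimal standalone probe, sketched here in Go (illustrative code, not part of minikube), reproduces the condition behind the "connection refused" lines:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to open a TCP connection to the apiserver port this profile uses.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// This is the state the log is stuck in: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Until a probe like this would succeed, every kubectl invocation against the kubeconfig is expected to emit the five memcache.go "Unhandled Error" lines and give up, exactly as each stderr block above and below shows.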
	I1217 00:54:44.507762 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:44.517733 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:44.517793 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:44.541892 1261197 cri.go:89] found id: ""
	I1217 00:54:44.541905 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.541924 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:44.541929 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:44.541986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:44.570803 1261197 cri.go:89] found id: ""
	I1217 00:54:44.570818 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.570824 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:44.570830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:44.570889 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:44.599324 1261197 cri.go:89] found id: ""
	I1217 00:54:44.599338 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.599345 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:44.599351 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:44.599412 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:44.632615 1261197 cri.go:89] found id: ""
	I1217 00:54:44.632629 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.632637 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:44.632643 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:44.632705 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:44.659976 1261197 cri.go:89] found id: ""
	I1217 00:54:44.659989 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.660009 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:44.660015 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:44.660085 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:44.688987 1261197 cri.go:89] found id: ""
	I1217 00:54:44.689000 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.689007 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:44.689013 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:44.689069 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:44.712988 1261197 cri.go:89] found id: ""
	I1217 00:54:44.713002 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.713010 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:44.713018 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:44.713030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:44.727473 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:44.727489 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:44.794008 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:44.794021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:44.794031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:44.855600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:44.855621 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:44.883007 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:44.883023 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.442293 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:47.452401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:47.452465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:47.475940 1261197 cri.go:89] found id: ""
	I1217 00:54:47.475953 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.475960 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:47.475965 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:47.476021 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:47.500287 1261197 cri.go:89] found id: ""
	I1217 00:54:47.500302 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.500309 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:47.500314 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:47.500371 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:47.537066 1261197 cri.go:89] found id: ""
	I1217 00:54:47.537080 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.537087 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:47.537091 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:47.537147 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:47.561363 1261197 cri.go:89] found id: ""
	I1217 00:54:47.561377 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.561384 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:47.561390 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:47.561446 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:47.586917 1261197 cri.go:89] found id: ""
	I1217 00:54:47.586931 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.586939 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:47.586944 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:47.587006 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:47.611775 1261197 cri.go:89] found id: ""
	I1217 00:54:47.611789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.611796 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:47.611805 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:47.611862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:47.648123 1261197 cri.go:89] found id: ""
	I1217 00:54:47.648137 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.648145 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:47.648152 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:47.648163 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.716428 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:47.716447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:47.732842 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:47.732876 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:47.801539 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:47.801549 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:47.801559 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:47.863256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:47.863276 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:50.394435 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:50.404927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:50.404986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:50.429607 1261197 cri.go:89] found id: ""
	I1217 00:54:50.429621 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.429628 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:50.429634 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:50.429731 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:50.454601 1261197 cri.go:89] found id: ""
	I1217 00:54:50.454615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.454622 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:50.454627 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:50.454689 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:50.484855 1261197 cri.go:89] found id: ""
	I1217 00:54:50.484877 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.484884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:50.484890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:50.484950 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:50.510003 1261197 cri.go:89] found id: ""
	I1217 00:54:50.510018 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.510025 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:50.510030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:50.510089 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:50.533511 1261197 cri.go:89] found id: ""
	I1217 00:54:50.533525 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.533532 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:50.533537 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:50.533602 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:50.558386 1261197 cri.go:89] found id: ""
	I1217 00:54:50.558400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.558407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:50.558419 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:50.558476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:50.587409 1261197 cri.go:89] found id: ""
	I1217 00:54:50.587422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.587429 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:50.587437 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:50.587447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:50.644042 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:50.644061 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:50.661242 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:50.661257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:50.732592 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:50.732602 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:50.732613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:50.793447 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:50.793466 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
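The enumeration that opens each pass walks a fixed list of control-plane components and treats empty output from `sudo crictl ps -a --quiet --name=<component>` as "No container was found". A small sketch of that loop, assuming crictl is installed and its runtime endpoint is configured (illustrative only, not the cri.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log queries, in the same order.
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		// --quiet prints one container ID per line; none means never created.
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}

In this report the query returns zero IDs for every component on every pass, which is why each pass ends with the same seven "No container was found" warnings before log gathering starts.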
	I1217 00:54:53.322439 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:53.332470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:53.332535 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:53.357094 1261197 cri.go:89] found id: ""
	I1217 00:54:53.357108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.357116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:53.357121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:53.357182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:53.381629 1261197 cri.go:89] found id: ""
	I1217 00:54:53.381667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.381674 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:53.381679 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:53.381743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:53.407630 1261197 cri.go:89] found id: ""
	I1217 00:54:53.407644 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.407651 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:53.407656 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:53.407718 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:53.435972 1261197 cri.go:89] found id: ""
	I1217 00:54:53.435986 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.435993 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:53.435999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:53.436059 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:53.461545 1261197 cri.go:89] found id: ""
	I1217 00:54:53.461558 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.461565 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:53.461570 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:53.461629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:53.491744 1261197 cri.go:89] found id: ""
	I1217 00:54:53.491758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.491766 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:53.491771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:53.491836 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:53.517147 1261197 cri.go:89] found id: ""
	I1217 00:54:53.517161 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.517170 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:53.517177 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:53.517188 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:53.573158 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:53.573177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:53.588088 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:53.588104 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:53.665911 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:53.665933 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:53.665945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:53.735506 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:53.735530 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.268624 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:56.279995 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:56.280060 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:56.304847 1261197 cri.go:89] found id: ""
	I1217 00:54:56.304874 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.304881 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:56.304887 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:56.304952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:56.329820 1261197 cri.go:89] found id: ""
	I1217 00:54:56.329834 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.329841 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:56.329846 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:56.329902 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:56.354667 1261197 cri.go:89] found id: ""
	I1217 00:54:56.354685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.354695 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:56.354700 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:56.354779 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:56.383823 1261197 cri.go:89] found id: ""
	I1217 00:54:56.383837 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.383844 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:56.383850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:56.383907 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:56.408219 1261197 cri.go:89] found id: ""
	I1217 00:54:56.408233 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.408240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:56.408246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:56.408305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:56.433745 1261197 cri.go:89] found id: ""
	I1217 00:54:56.433758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.433765 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:56.433771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:56.433843 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:56.458631 1261197 cri.go:89] found id: ""
	I1217 00:54:56.458645 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.458653 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:56.458660 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:56.458671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:56.473217 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:56.473233 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:56.540570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:56.540579 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:56.540591 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:56.605775 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:56.605795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.659436 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:56.659452 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.225973 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:59.236165 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:59.236223 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:59.262172 1261197 cri.go:89] found id: ""
	I1217 00:54:59.262185 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.262193 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:59.262198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:59.262254 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:59.286403 1261197 cri.go:89] found id: ""
	I1217 00:54:59.286417 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.286425 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:59.286430 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:59.286489 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:59.311254 1261197 cri.go:89] found id: ""
	I1217 00:54:59.311268 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.311276 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:59.311280 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:59.311336 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:59.339495 1261197 cri.go:89] found id: ""
	I1217 00:54:59.339510 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.339519 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:59.339524 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:59.339583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:59.364038 1261197 cri.go:89] found id: ""
	I1217 00:54:59.364052 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.364068 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:59.364074 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:59.364130 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:59.388359 1261197 cri.go:89] found id: ""
	I1217 00:54:59.388373 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.388391 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:59.388396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:59.388462 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:59.412775 1261197 cri.go:89] found id: ""
	I1217 00:54:59.412789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.412806 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:59.412815 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:59.412824 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:59.475190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:59.475211 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:59.504917 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:59.504933 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.561462 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:59.561481 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:59.576156 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:59.576171 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:59.641179 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
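The "container status" step is the only gathering command with a built-in fallback: it resolves crictl via `which` (echoing the bare name if lookup fails, so sudo still attempts it) and, if that whole command errors out, retries with `docker ps -a`. Roughly the same logic, as a hedged Go sketch rather than the actual shell one-liner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer the CRI CLI; fall back to the docker CLI if crictl is absent or fails.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("no container runtime CLI responded:", err)
		return
	}
	fmt.Print(string(out))
}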
	I1217 00:55:02.141436 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:02.152012 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:02.152075 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:02.180949 1261197 cri.go:89] found id: ""
	I1217 00:55:02.180963 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.180970 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:02.180976 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:02.181046 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:02.204892 1261197 cri.go:89] found id: ""
	I1217 00:55:02.204915 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.204922 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:02.204928 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:02.205035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:02.230226 1261197 cri.go:89] found id: ""
	I1217 00:55:02.230239 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.230247 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:02.230252 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:02.230309 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:02.254922 1261197 cri.go:89] found id: ""
	I1217 00:55:02.254936 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.254944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:02.254949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:02.255012 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:02.279652 1261197 cri.go:89] found id: ""
	I1217 00:55:02.279666 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.279673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:02.279678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:02.279737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:02.306126 1261197 cri.go:89] found id: ""
	I1217 00:55:02.306139 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.306146 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:02.306152 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:02.306209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:02.330968 1261197 cri.go:89] found id: ""
	I1217 00:55:02.330982 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.330989 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:02.330997 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:02.331007 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:02.386453 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:02.386473 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:02.401019 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:02.401036 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:02.462681 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:02.462691 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:02.462701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:02.523460 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:02.523480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.051274 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:05.061850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:05.061924 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:05.087077 1261197 cri.go:89] found id: ""
	I1217 00:55:05.087092 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.087099 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:05.087105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:05.087167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:05.113592 1261197 cri.go:89] found id: ""
	I1217 00:55:05.113607 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.113614 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:05.113620 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:05.113702 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:05.139004 1261197 cri.go:89] found id: ""
	I1217 00:55:05.139019 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.139026 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:05.139031 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:05.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:05.163703 1261197 cri.go:89] found id: ""
	I1217 00:55:05.163717 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.163725 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:05.163731 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:05.163791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:05.188990 1261197 cri.go:89] found id: ""
	I1217 00:55:05.189004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.189011 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:05.189024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:05.189083 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:05.218147 1261197 cri.go:89] found id: ""
	I1217 00:55:05.218161 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.218168 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:05.218174 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:05.218246 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:05.242561 1261197 cri.go:89] found id: ""
	I1217 00:55:05.242575 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.242592 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:05.242600 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:05.242610 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:05.303683 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:05.303701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.331484 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:05.331499 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:05.392845 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:05.392868 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:05.407882 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:05.407898 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:05.474193 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:07.974416 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:07.984527 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:07.984588 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:08.011706 1261197 cri.go:89] found id: ""
	I1217 00:55:08.011722 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.011730 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:08.011735 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:08.011803 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:08.038984 1261197 cri.go:89] found id: ""
	I1217 00:55:08.038998 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.039005 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:08.039011 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:08.039072 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:08.066839 1261197 cri.go:89] found id: ""
	I1217 00:55:08.066854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.066861 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:08.066866 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:08.066928 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:08.096940 1261197 cri.go:89] found id: ""
	I1217 00:55:08.096954 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.096962 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:08.096968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:08.097026 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:08.124219 1261197 cri.go:89] found id: ""
	I1217 00:55:08.124232 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.124240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:08.124245 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:08.124308 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:08.149339 1261197 cri.go:89] found id: ""
	I1217 00:55:08.149353 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.149360 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:08.149365 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:08.149424 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:08.173327 1261197 cri.go:89] found id: ""
	I1217 00:55:08.173350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.173358 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:08.173366 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:08.173376 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:08.229871 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:08.229891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:08.244853 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:08.244877 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:08.312062 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:08.312072 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:08.312082 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:08.373219 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:08.373238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:10.901813 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:10.913062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:10.913131 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:10.939973 1261197 cri.go:89] found id: ""
	I1217 00:55:10.939987 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.939994 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:10.939999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:10.940057 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:10.965488 1261197 cri.go:89] found id: ""
	I1217 00:55:10.965502 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.965509 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:10.965514 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:10.965574 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:10.990743 1261197 cri.go:89] found id: ""
	I1217 00:55:10.990758 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.990766 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:10.990772 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:10.990851 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:11.017298 1261197 cri.go:89] found id: ""
	I1217 00:55:11.017322 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.017330 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:11.017336 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:11.017405 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:11.043148 1261197 cri.go:89] found id: ""
	I1217 00:55:11.043163 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.043170 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:11.043175 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:11.043236 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:11.070182 1261197 cri.go:89] found id: ""
	I1217 00:55:11.070196 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.070207 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:11.070213 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:11.070284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:11.098403 1261197 cri.go:89] found id: ""
	I1217 00:55:11.098419 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.098426 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:11.098434 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:11.098445 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:11.154712 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:11.154732 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:11.171447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:11.171469 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:11.235332 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:11.235344 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:11.235354 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:11.298591 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:11.298611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:13.826200 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:13.836246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:13.836303 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:13.861099 1261197 cri.go:89] found id: ""
	I1217 00:55:13.861113 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.861120 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:13.861125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:13.861183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:13.898315 1261197 cri.go:89] found id: ""
	I1217 00:55:13.898328 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.898335 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:13.898340 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:13.898403 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:13.927870 1261197 cri.go:89] found id: ""
	I1217 00:55:13.927884 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.927902 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:13.927908 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:13.927986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:13.956407 1261197 cri.go:89] found id: ""
	I1217 00:55:13.956421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.956428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:13.956433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:13.956500 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:13.981521 1261197 cri.go:89] found id: ""
	I1217 00:55:13.981553 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.981560 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:13.981565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:13.981630 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:14.007326 1261197 cri.go:89] found id: ""
	I1217 00:55:14.007350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.007358 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:14.007364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:14.007433 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:14.034794 1261197 cri.go:89] found id: ""
	I1217 00:55:14.034809 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.034816 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:14.034824 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:14.034835 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:14.091355 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:14.091375 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:14.106561 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:14.106579 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:14.176400 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:14.176410 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:14.176420 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:14.242568 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:14.242593 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:16.776330 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:16.786496 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:16.786558 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:16.811486 1261197 cri.go:89] found id: ""
	I1217 00:55:16.811500 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.811507 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:16.811512 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:16.811576 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:16.839885 1261197 cri.go:89] found id: ""
	I1217 00:55:16.839898 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.839905 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:16.839910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:16.839972 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:16.865332 1261197 cri.go:89] found id: ""
	I1217 00:55:16.865346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.865353 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:16.865359 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:16.865419 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:16.904044 1261197 cri.go:89] found id: ""
	I1217 00:55:16.904058 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.904065 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:16.904071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:16.904133 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:16.934495 1261197 cri.go:89] found id: ""
	I1217 00:55:16.934508 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.934515 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:16.934521 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:16.934582 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:16.959038 1261197 cri.go:89] found id: ""
	I1217 00:55:16.959052 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.959060 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:16.959065 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:16.959123 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:16.987609 1261197 cri.go:89] found id: ""
	I1217 00:55:16.987622 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.987630 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:16.987637 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:16.987647 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:17.046635 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:17.046655 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:17.062321 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:17.062345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:17.130440 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:17.130450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:17.130460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:17.192501 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:17.192521 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:19.724677 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:19.736386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:19.736459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:19.763100 1261197 cri.go:89] found id: ""
	I1217 00:55:19.763114 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.763121 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:19.763127 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:19.763185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:19.791470 1261197 cri.go:89] found id: ""
	I1217 00:55:19.791483 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.791490 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:19.791495 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:19.791552 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:19.816395 1261197 cri.go:89] found id: ""
	I1217 00:55:19.816410 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.816417 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:19.816422 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:19.816482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:19.841971 1261197 cri.go:89] found id: ""
	I1217 00:55:19.841984 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.841991 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:19.841997 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:19.842058 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:19.866385 1261197 cri.go:89] found id: ""
	I1217 00:55:19.866399 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.866406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:19.866411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:19.866468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:19.904121 1261197 cri.go:89] found id: ""
	I1217 00:55:19.904135 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.904153 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:19.904160 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:19.904217 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:19.940290 1261197 cri.go:89] found id: ""
	I1217 00:55:19.940304 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.940311 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:19.940319 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:19.940329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:19.955177 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:19.955193 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:20.024806 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:20.024817 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:20.024830 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:20.088972 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:20.088996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:20.122058 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:20.122075 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:22.679929 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:22.690102 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:22.690162 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:22.717462 1261197 cri.go:89] found id: ""
	I1217 00:55:22.717476 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.717483 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:22.717489 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:22.717550 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:22.744363 1261197 cri.go:89] found id: ""
	I1217 00:55:22.744377 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.744390 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:22.744395 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:22.744454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:22.770975 1261197 cri.go:89] found id: ""
	I1217 00:55:22.770989 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.770996 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:22.771001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:22.771068 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:22.795702 1261197 cri.go:89] found id: ""
	I1217 00:55:22.795716 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.795724 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:22.795729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:22.795787 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:22.820186 1261197 cri.go:89] found id: ""
	I1217 00:55:22.820200 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.820206 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:22.820212 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:22.820269 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:22.844518 1261197 cri.go:89] found id: ""
	I1217 00:55:22.844533 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.844540 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:22.844545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:22.844604 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:22.884821 1261197 cri.go:89] found id: ""
	I1217 00:55:22.884834 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.884841 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:22.884849 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:22.884860 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:22.901504 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:22.901520 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:22.975115 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:22.975125 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:22.975135 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:23.036546 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:23.036566 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:23.070681 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:23.070697 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.627462 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:25.638109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:25.638168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:25.671791 1261197 cri.go:89] found id: ""
	I1217 00:55:25.671806 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.671813 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:25.671821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:25.671884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:25.696990 1261197 cri.go:89] found id: ""
	I1217 00:55:25.697004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.697011 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:25.697016 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:25.697082 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:25.722087 1261197 cri.go:89] found id: ""
	I1217 00:55:25.722101 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.722110 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:25.722115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:25.722184 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:25.747407 1261197 cri.go:89] found id: ""
	I1217 00:55:25.747421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.747428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:25.747433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:25.747495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:25.772602 1261197 cri.go:89] found id: ""
	I1217 00:55:25.772617 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.772623 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:25.772628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:25.772694 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:25.802452 1261197 cri.go:89] found id: ""
	I1217 00:55:25.802466 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.802473 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:25.802478 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:25.802538 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:25.827066 1261197 cri.go:89] found id: ""
	I1217 00:55:25.827081 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.827088 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:25.827096 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:25.827109 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.886656 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:25.886676 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:25.903090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:25.903108 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:25.973568 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:25.973578 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:25.973587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:26.036642 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:26.036662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:28.571573 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:28.581543 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:28.581601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:28.607202 1261197 cri.go:89] found id: ""
	I1217 00:55:28.607216 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.607224 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:28.607229 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:28.607288 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:28.630842 1261197 cri.go:89] found id: ""
	I1217 00:55:28.630857 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.630864 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:28.630869 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:28.630927 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:28.656052 1261197 cri.go:89] found id: ""
	I1217 00:55:28.656066 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.656073 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:28.656079 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:28.656135 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:28.680008 1261197 cri.go:89] found id: ""
	I1217 00:55:28.680022 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.680029 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:28.680034 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:28.680104 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:28.704668 1261197 cri.go:89] found id: ""
	I1217 00:55:28.704682 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.704689 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:28.704694 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:28.704756 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:28.733961 1261197 cri.go:89] found id: ""
	I1217 00:55:28.733974 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.733981 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:28.733986 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:28.734042 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:28.759990 1261197 cri.go:89] found id: ""
	I1217 00:55:28.760005 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.760013 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:28.760021 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:28.760030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:28.815642 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:28.815661 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:28.830313 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:28.830333 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:28.907265 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:28.907287 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:28.907299 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:28.978223 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:28.978244 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
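The block above is one pass of minikube's apiserver wait loop: pgrep looks for a running kube-apiserver process, crictl is then queried for each control-plane component in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and every query returns an empty ID list, so the loop falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The same checks can be reproduced by hand; a minimal sketch, assuming the functional-608344 profile from this report is still up:

	# list any apiserver container, running or exited (empty output = none was ever created)
	minikube -p functional-608344 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	# look for a bare apiserver process outside the container runtime, as the loop does
	minikube -p functional-608344 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*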
	I1217 00:55:31.508374 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:31.518631 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:31.518696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:31.543672 1261197 cri.go:89] found id: ""
	I1217 00:55:31.543686 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.543693 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:31.543701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:31.543760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:31.568914 1261197 cri.go:89] found id: ""
	I1217 00:55:31.568929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.568944 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:31.568949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:31.569017 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:31.593432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.593453 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.593461 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:31.593466 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:31.593537 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:31.619217 1261197 cri.go:89] found id: ""
	I1217 00:55:31.619231 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.619238 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:31.619243 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:31.619299 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:31.647432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.647445 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.647453 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:31.647458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:31.647522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:31.675117 1261197 cri.go:89] found id: ""
	I1217 00:55:31.675130 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.675138 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:31.675143 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:31.675200 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:31.698973 1261197 cri.go:89] found id: ""
	I1217 00:55:31.698986 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.698993 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:31.699001 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:31.699010 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:31.754429 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:31.754447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:31.768968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:31.768984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:31.831791 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:31.831801 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:31.831811 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:31.900759 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:31.900777 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
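Every describe-nodes attempt above fails identically: the node's bundled kubectl dials https://localhost:8441 and gets connection refused on [::1]:8441. Connection refused (as opposed to a timeout or a TLS/auth error) means nothing is listening on the apiserver port at all. A quick confirmation from inside the node; a sketch, assuming the ss utility is present in the minikube image:

	# no LISTEN line mentioning :8441 means the apiserver socket was never bound
	minikube -p functional-608344 ssh -- sudo ss -tlnp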
	I1217 00:55:34.429727 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:34.440562 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:34.440629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:34.465412 1261197 cri.go:89] found id: ""
	I1217 00:55:34.465425 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.465433 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:34.465438 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:34.465496 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:34.489937 1261197 cri.go:89] found id: ""
	I1217 00:55:34.489951 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.489978 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:34.489987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:34.490055 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:34.520581 1261197 cri.go:89] found id: ""
	I1217 00:55:34.520602 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.520610 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:34.520615 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:34.520682 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:34.547718 1261197 cri.go:89] found id: ""
	I1217 00:55:34.547732 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.547739 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:34.547744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:34.547806 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:34.572103 1261197 cri.go:89] found id: ""
	I1217 00:55:34.572116 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.572133 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:34.572138 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:34.572209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:34.600789 1261197 cri.go:89] found id: ""
	I1217 00:55:34.600819 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.600827 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:34.600832 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:34.600921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:34.627220 1261197 cri.go:89] found id: ""
	I1217 00:55:34.627234 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.627240 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:34.627248 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:34.627257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:34.682307 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:34.682327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:34.697255 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:34.697271 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:34.764504 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:34.764515 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:34.764525 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:34.826010 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:34.826029 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
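Lines such as 'listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}' print the Go options struct that minikube's cri helper translates into the crictl invocation on the following line: State:all becomes the -a flag (include exited containers) and Name becomes --name. The standalone equivalent for one component; a sketch:

	# {State:all Name:coredns Namespaces:[]}  ->
	sudo crictl ps -a --quiet --name=coredns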
	I1217 00:55:37.353119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:37.363135 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:37.363198 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:37.387754 1261197 cri.go:89] found id: ""
	I1217 00:55:37.387773 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.387781 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:37.387787 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:37.387845 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:37.413391 1261197 cri.go:89] found id: ""
	I1217 00:55:37.413404 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.413411 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:37.413417 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:37.413474 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:37.439523 1261197 cri.go:89] found id: ""
	I1217 00:55:37.439537 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.439544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:37.439549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:37.439607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:37.469209 1261197 cri.go:89] found id: ""
	I1217 00:55:37.469223 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.469230 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:37.469235 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:37.469296 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:37.495794 1261197 cri.go:89] found id: ""
	I1217 00:55:37.495807 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.495814 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:37.495819 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:37.495875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:37.520612 1261197 cri.go:89] found id: ""
	I1217 00:55:37.520625 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.520642 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:37.520648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:37.520720 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:37.547269 1261197 cri.go:89] found id: ""
	I1217 00:55:37.547283 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.547290 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:37.547299 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:37.547308 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:37.608835 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:37.608856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.635364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:37.635383 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:37.694966 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:37.694984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:37.709746 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:37.709763 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:37.775515 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:40.277182 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:40.287332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:40.287393 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:40.315852 1261197 cri.go:89] found id: ""
	I1217 00:55:40.315866 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.315873 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:40.315879 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:40.315936 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:40.340196 1261197 cri.go:89] found id: ""
	I1217 00:55:40.340210 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.340217 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:40.340222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:40.340279 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:40.365794 1261197 cri.go:89] found id: ""
	I1217 00:55:40.365815 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.365823 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:40.365828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:40.365899 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:40.391466 1261197 cri.go:89] found id: ""
	I1217 00:55:40.391480 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.391488 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:40.391493 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:40.391553 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:40.420286 1261197 cri.go:89] found id: ""
	I1217 00:55:40.420300 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.420307 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:40.420312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:40.420373 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:40.449247 1261197 cri.go:89] found id: ""
	I1217 00:55:40.449261 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.449268 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:40.449274 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:40.449331 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:40.474951 1261197 cri.go:89] found id: ""
	I1217 00:55:40.474965 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.474972 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:40.474980 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:40.474990 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:40.540502 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:40.540513 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:40.540524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:40.602747 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:40.602766 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:40.629888 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:40.629904 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:40.686174 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:40.686191 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
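The gather order rotates between iterations (kubelet first in some passes, containerd or describe-nodes first in others), but the command set is fixed. When every crictl query comes back empty, the kubelet and containerd journals are the two logs most likely to explain why no control-plane container was created; a sketch using the same journalctl calls the loop itself runs, with --no-pager added for non-interactive use:

	minikube -p functional-608344 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube -p functional-608344 ssh -- sudo journalctl -u containerd -n 400 --no-pager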
	I1217 00:55:43.201825 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:43.212126 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:43.212185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:43.237088 1261197 cri.go:89] found id: ""
	I1217 00:55:43.237109 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.237115 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:43.237121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:43.237183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:43.262148 1261197 cri.go:89] found id: ""
	I1217 00:55:43.262162 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.262177 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:43.262182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:43.262239 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:43.286264 1261197 cri.go:89] found id: ""
	I1217 00:55:43.286278 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.286285 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:43.286290 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:43.286346 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:43.310644 1261197 cri.go:89] found id: ""
	I1217 00:55:43.310657 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.310664 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:43.310670 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:43.310730 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:43.335131 1261197 cri.go:89] found id: ""
	I1217 00:55:43.335146 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.335153 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:43.335158 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:43.335220 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:43.364301 1261197 cri.go:89] found id: ""
	I1217 00:55:43.364315 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.364323 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:43.364331 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:43.364390 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:43.391204 1261197 cri.go:89] found id: ""
	I1217 00:55:43.391218 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.391225 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:43.391233 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:43.391252 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:43.450751 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:43.450771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.466709 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:43.466726 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:43.533713 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:43.533723 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:43.533734 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:43.601250 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:43.601269 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.134875 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:46.146399 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:46.146468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:46.179014 1261197 cri.go:89] found id: ""
	I1217 00:55:46.179028 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.179044 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:46.179050 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:46.179115 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:46.208346 1261197 cri.go:89] found id: ""
	I1217 00:55:46.208360 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.208377 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:46.208383 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:46.208441 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:46.233331 1261197 cri.go:89] found id: ""
	I1217 00:55:46.233346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.233361 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:46.233367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:46.233423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:46.259330 1261197 cri.go:89] found id: ""
	I1217 00:55:46.259344 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.259351 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:46.259357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:46.259413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:46.283871 1261197 cri.go:89] found id: ""
	I1217 00:55:46.283885 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.283902 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:46.283907 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:46.283975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:46.308301 1261197 cri.go:89] found id: ""
	I1217 00:55:46.308316 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.308331 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:46.308337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:46.308397 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:46.332677 1261197 cri.go:89] found id: ""
	I1217 00:55:46.332691 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.332699 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:46.332706 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:46.332716 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:46.347830 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:46.347846 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:46.413688 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:46.413699 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:46.413709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:46.475238 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:46.475260 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.502692 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:46.502708 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.063356 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:49.074298 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:49.074364 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:49.102541 1261197 cri.go:89] found id: ""
	I1217 00:55:49.102555 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.102562 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:49.102567 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:49.102625 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:49.132690 1261197 cri.go:89] found id: ""
	I1217 00:55:49.132706 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.132713 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:49.132718 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:49.132780 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:49.159962 1261197 cri.go:89] found id: ""
	I1217 00:55:49.159976 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.159983 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:49.159987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:49.160047 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:49.186672 1261197 cri.go:89] found id: ""
	I1217 00:55:49.186685 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.186692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:49.186703 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:49.186760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:49.215488 1261197 cri.go:89] found id: ""
	I1217 00:55:49.215506 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.215513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:49.215518 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:49.215594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:49.243652 1261197 cri.go:89] found id: ""
	I1217 00:55:49.243667 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.243674 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:49.243680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:49.243746 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:49.271745 1261197 cri.go:89] found id: ""
	I1217 00:55:49.271762 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.271769 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:49.271777 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:49.271789 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:49.305614 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:49.305638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.361396 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:49.361414 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:49.377081 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:49.377097 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:49.448394 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:49.448405 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:49.448416 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.014619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:52.025272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:52.025334 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:52.050179 1261197 cri.go:89] found id: ""
	I1217 00:55:52.050193 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.050201 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:52.050206 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:52.050267 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:52.075171 1261197 cri.go:89] found id: ""
	I1217 00:55:52.075186 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.075193 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:52.075198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:52.075258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:52.100730 1261197 cri.go:89] found id: ""
	I1217 00:55:52.100745 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.100752 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:52.100758 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:52.100819 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:52.139001 1261197 cri.go:89] found id: ""
	I1217 00:55:52.139016 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.139023 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:52.139028 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:52.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:52.167837 1261197 cri.go:89] found id: ""
	I1217 00:55:52.167854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.167861 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:52.167876 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:52.167939 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:52.195893 1261197 cri.go:89] found id: ""
	I1217 00:55:52.195907 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.195914 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:52.195919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:52.195986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:52.226474 1261197 cri.go:89] found id: ""
	I1217 00:55:52.226489 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.226496 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:52.226504 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:52.226514 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:52.283106 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:52.283125 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:52.298214 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:52.298230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:52.368183 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:52.368194 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:52.368205 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.430851 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:52.430873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
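Each iteration above has the same shape: pgrep for a live kube-apiserver process, one crictl query per expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet — all empty), then log gathering; kubectl describe nodes fails because nothing is serving on localhost:8441. A minimal sketch of the same probes run by hand, assuming SSH access to the node of this run's functional-608344 profile (illustrative commands, not part of the test harness):

    # check for a running apiserver process on the node
    minikube -p functional-608344 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # ask containerd (via crictl) whether an apiserver container was ever created
    minikube -p functional-608344 ssh -- sudo crictl ps -a --name=kube-apiserver
    # probe the apiserver port directly; in this run the connection is refused
    minikube -p functional-608344 ssh -- curl -sk https://localhost:8441/healthz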
	I1217 00:55:54.962672 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.972814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:54.972874 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:54.998554 1261197 cri.go:89] found id: ""
	I1217 00:55:54.998568 1261197 logs.go:282] 0 containers: []
	W1217 00:55:54.998575 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:54.998580 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:54.998640 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:55.027159 1261197 cri.go:89] found id: ""
	I1217 00:55:55.027174 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.027181 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:55.027187 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:55.027258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:55.057204 1261197 cri.go:89] found id: ""
	I1217 00:55:55.057219 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.057226 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:55.057241 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:55.057302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:55.082858 1261197 cri.go:89] found id: ""
	I1217 00:55:55.082872 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.082880 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:55.082885 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:55.082952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:55.108074 1261197 cri.go:89] found id: ""
	I1217 00:55:55.108088 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.108095 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:55.108100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:55.108168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:55.142170 1261197 cri.go:89] found id: ""
	I1217 00:55:55.142184 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.142204 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:55.142210 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:55.142277 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:55.174306 1261197 cri.go:89] found id: ""
	I1217 00:55:55.174333 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.174341 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:55.174349 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:55.174361 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:55.234605 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:55.234625 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:55.249756 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:55.249773 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:55.312439 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:55.312450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:55.312460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:55.373256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:55.373275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:57.900997 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.911464 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:57.911522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:57.936082 1261197 cri.go:89] found id: ""
	I1217 00:55:57.936096 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.936104 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:57.936115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:57.936172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:57.960175 1261197 cri.go:89] found id: ""
	I1217 00:55:57.960190 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.960197 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:57.960202 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:57.960266 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:57.986658 1261197 cri.go:89] found id: ""
	I1217 00:55:57.986671 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.986678 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:57.986684 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:57.986743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:58.012944 1261197 cri.go:89] found id: ""
	I1217 00:55:58.012959 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.012967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:58.012973 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:58.013035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:58.041226 1261197 cri.go:89] found id: ""
	I1217 00:55:58.041241 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.041248 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:58.041253 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:58.041319 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:58.066914 1261197 cri.go:89] found id: ""
	I1217 00:55:58.066929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.066937 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:58.066943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:58.067000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:58.090571 1261197 cri.go:89] found id: ""
	I1217 00:55:58.090586 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.090593 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:58.090601 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:58.090611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:58.161546 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:58.161556 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:58.161578 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:58.230111 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:58.230131 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:58.259134 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:58.259150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:58.315698 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:58.315715 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:00.831924 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.842106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:00.842166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:00.868037 1261197 cri.go:89] found id: ""
	I1217 00:56:00.868051 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.868057 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:00.868062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:00.868138 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:00.893020 1261197 cri.go:89] found id: ""
	I1217 00:56:00.893046 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.893053 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:00.893059 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:00.893125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:00.918054 1261197 cri.go:89] found id: ""
	I1217 00:56:00.918068 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.918075 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:00.918081 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:00.918139 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:00.947584 1261197 cri.go:89] found id: ""
	I1217 00:56:00.947599 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.947607 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:00.947612 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:00.947675 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:00.974913 1261197 cri.go:89] found id: ""
	I1217 00:56:00.974929 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.974936 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:00.974941 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:00.975000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:00.998262 1261197 cri.go:89] found id: ""
	I1217 00:56:00.998276 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.998284 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:00.998289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:00.998345 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:01.025055 1261197 cri.go:89] found id: ""
	I1217 00:56:01.025071 1261197 logs.go:282] 0 containers: []
	W1217 00:56:01.025079 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:01.025099 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:01.025110 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:01.080854 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:01.080873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:01.095680 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:01.095698 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:01.174559 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:01.164757   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.165678   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.167766   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.168430   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.170271   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:01.174574 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:01.174587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:01.240953 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:01.240973 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:03.778460 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.788536 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:03.788601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:03.816065 1261197 cri.go:89] found id: ""
	I1217 00:56:03.816080 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.816087 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:03.816093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:03.816158 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:03.840359 1261197 cri.go:89] found id: ""
	I1217 00:56:03.840373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.840381 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:03.840386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:03.840443 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:03.865338 1261197 cri.go:89] found id: ""
	I1217 00:56:03.865351 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.865359 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:03.865364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:03.865421 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:03.889916 1261197 cri.go:89] found id: ""
	I1217 00:56:03.889930 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.889937 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:03.889943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:03.890011 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:03.913782 1261197 cri.go:89] found id: ""
	I1217 00:56:03.913796 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.913804 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:03.913815 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:03.913875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:03.938356 1261197 cri.go:89] found id: ""
	I1217 00:56:03.938371 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.938379 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:03.938385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:03.938447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:03.963432 1261197 cri.go:89] found id: ""
	I1217 00:56:03.963446 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.963454 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:03.963461 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:03.963474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:04.024730 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:04.024752 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:04.057316 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:04.057331 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:04.115813 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:04.115832 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:04.133889 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:04.133905 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:04.212782 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:04.204758   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.205392   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.206948   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.207288   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.208782   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.713766 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.723767 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:06.723837 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:06.747547 1261197 cri.go:89] found id: ""
	I1217 00:56:06.747561 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.747568 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:06.747574 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:06.747632 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:06.772850 1261197 cri.go:89] found id: ""
	I1217 00:56:06.772864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.772871 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:06.772877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:06.772942 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:06.797087 1261197 cri.go:89] found id: ""
	I1217 00:56:06.797101 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.797108 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:06.797113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:06.797171 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:06.821815 1261197 cri.go:89] found id: ""
	I1217 00:56:06.821829 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.821836 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:06.821842 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:06.821906 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:06.850207 1261197 cri.go:89] found id: ""
	I1217 00:56:06.850221 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.850229 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:06.850234 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:06.850294 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:06.874139 1261197 cri.go:89] found id: ""
	I1217 00:56:06.874153 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.874160 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:06.874166 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:06.874224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:06.899438 1261197 cri.go:89] found id: ""
	I1217 00:56:06.899453 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.899461 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:06.899469 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:06.899480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:06.967530 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:06.958975   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.959516   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961123   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961674   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.963331   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.967542 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:06.967554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:07.030281 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:07.030301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:07.062210 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:07.062226 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:07.121373 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:07.121391 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:09.638141 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.648301 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:09.648359 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:09.672936 1261197 cri.go:89] found id: ""
	I1217 00:56:09.672951 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.672959 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:09.672964 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:09.673022 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:09.697500 1261197 cri.go:89] found id: ""
	I1217 00:56:09.697513 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.697520 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:09.697526 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:09.697583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:09.723330 1261197 cri.go:89] found id: ""
	I1217 00:56:09.723344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.723352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:09.723360 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:09.723423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:09.747017 1261197 cri.go:89] found id: ""
	I1217 00:56:09.747032 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.747039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:09.747044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:09.747100 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:09.771652 1261197 cri.go:89] found id: ""
	I1217 00:56:09.771666 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.771673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:09.771678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:09.771737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:09.799785 1261197 cri.go:89] found id: ""
	I1217 00:56:09.799799 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.799807 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:09.799812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:09.799871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:09.827063 1261197 cri.go:89] found id: ""
	I1217 00:56:09.827077 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.827085 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:09.827093 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:09.827103 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:09.894392 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:09.886579   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.887120   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.888619   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.889055   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.890605   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:09.894403 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:09.894413 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:09.955961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:09.955981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:09.982364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:09.982380 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:10.051689 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:10.051709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.568963 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.579001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:12.579065 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:12.603247 1261197 cri.go:89] found id: ""
	I1217 00:56:12.603261 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.603269 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:12.603275 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:12.603332 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:12.635591 1261197 cri.go:89] found id: ""
	I1217 00:56:12.635606 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.635612 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:12.635617 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:12.635676 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:12.659802 1261197 cri.go:89] found id: ""
	I1217 00:56:12.659817 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.659824 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:12.659830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:12.659887 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:12.684671 1261197 cri.go:89] found id: ""
	I1217 00:56:12.684684 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.684692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:12.684697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:12.684766 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:12.712570 1261197 cri.go:89] found id: ""
	I1217 00:56:12.712584 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.712606 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:12.712611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:12.712668 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:12.739330 1261197 cri.go:89] found id: ""
	I1217 00:56:12.739345 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.739353 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:12.739358 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:12.739416 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:12.767372 1261197 cri.go:89] found id: ""
	I1217 00:56:12.767386 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.767393 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:12.767401 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:12.767411 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:12.822789 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:12.822807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.839685 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:12.839702 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:12.916219 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:12.907759   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.908464   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910139   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910712   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.912266   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:12.916230 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:12.916241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:12.977800 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:12.977820 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
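When every crictl query stays empty across iterations like this, the failure is upstream of containerd: it typically means the kubelet never created the static control-plane pods at all. Of the four sources gathered each round (kubelet, dmesg, containerd, container status), the kubelet unit log is usually the one that says why. A hedged one-liner to pull just its recent errors from the node (illustrative, assuming the same profile as above):

    minikube -p functional-608344 ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40"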
	I1217 00:56:15.507621 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.518177 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:15.518240 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:15.544777 1261197 cri.go:89] found id: ""
	I1217 00:56:15.544792 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.544800 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:15.544806 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:15.544864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:15.569420 1261197 cri.go:89] found id: ""
	I1217 00:56:15.569433 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.569441 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:15.569447 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:15.569505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:15.594329 1261197 cri.go:89] found id: ""
	I1217 00:56:15.594344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.594352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:15.594357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:15.594417 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:15.619820 1261197 cri.go:89] found id: ""
	I1217 00:56:15.619834 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.619842 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:15.619847 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:15.619911 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:15.645055 1261197 cri.go:89] found id: ""
	I1217 00:56:15.645076 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.645084 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:15.645090 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:15.645152 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:15.671575 1261197 cri.go:89] found id: ""
	I1217 00:56:15.671590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.671597 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:15.671602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:15.671667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:15.700941 1261197 cri.go:89] found id: ""
	I1217 00:56:15.700955 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.700963 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:15.700971 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:15.700980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.728886 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:15.728931 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:15.784718 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:15.784736 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:15.799312 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:15.799335 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:15.865192 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
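	The block above is one pass of minikube's control-plane wait loop: it probes each expected component container through crictl, finds none, then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying roughly every three seconds (the passes at 00:56:18 through 00:56:36 below are the same cycle with fresh timestamps and PIDs). A minimal sketch of the same probe run by hand, assuming shell access to the node (for example via minikube ssh) and that curl is present; the loop is illustrative, not minikube's exact code:

	# Check for each control-plane container the log is matching on.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done

	# Every describe-nodes failure above reduces to this: nothing listens on 8441.
	curl -sk https://localhost:8441/healthz || echo "apiserver not reachable on 8441"

	With zero containers found, the connection-refused stderr from kubectl is the expected symptom rather than a separate fault.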
	I1217 00:56:15.865203 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:15.865214 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.428562 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.438711 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:18.438772 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:18.465045 1261197 cri.go:89] found id: ""
	I1217 00:56:18.465060 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.465067 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:18.465073 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:18.465132 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:18.490715 1261197 cri.go:89] found id: ""
	I1217 00:56:18.490728 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.490736 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:18.490741 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:18.490799 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:18.519522 1261197 cri.go:89] found id: ""
	I1217 00:56:18.519536 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.519544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:18.519549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:18.519611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:18.545098 1261197 cri.go:89] found id: ""
	I1217 00:56:18.545112 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.545119 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:18.545125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:18.545183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:18.570978 1261197 cri.go:89] found id: ""
	I1217 00:56:18.570993 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.571000 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:18.571005 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:18.571063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:18.594800 1261197 cri.go:89] found id: ""
	I1217 00:56:18.594814 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.594822 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:18.594828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:18.594884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:18.618575 1261197 cri.go:89] found id: ""
	I1217 00:56:18.618589 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.618597 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:18.618604 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:18.618613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.680474 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:18.680494 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:18.708635 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:18.708651 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:18.763927 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:18.763949 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:18.780209 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:18.780225 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:18.849998 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:21.351687 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.362159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:21.362230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:21.390614 1261197 cri.go:89] found id: ""
	I1217 00:56:21.390630 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.390637 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:21.390648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:21.390716 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:21.420609 1261197 cri.go:89] found id: ""
	I1217 00:56:21.420623 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.420630 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:21.420636 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:21.420703 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:21.446943 1261197 cri.go:89] found id: ""
	I1217 00:56:21.446957 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.446964 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:21.446970 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:21.447041 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:21.477813 1261197 cri.go:89] found id: ""
	I1217 00:56:21.477828 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.477835 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:21.477841 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:21.477901 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:21.504024 1261197 cri.go:89] found id: ""
	I1217 00:56:21.504058 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.504065 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:21.504071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:21.504150 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:21.534132 1261197 cri.go:89] found id: ""
	I1217 00:56:21.534146 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.534154 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:21.534159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:21.534222 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:21.558094 1261197 cri.go:89] found id: ""
	I1217 00:56:21.558113 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.558122 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:21.558130 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:21.558141 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:21.620436 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:21.620462 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:21.635283 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:21.635301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:21.698118 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:21.698128 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:21.698139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:21.760016 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:21.760037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.289952 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.300354 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:24.300457 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:24.324823 1261197 cri.go:89] found id: ""
	I1217 00:56:24.324838 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.324846 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:24.324852 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:24.324921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:24.349508 1261197 cri.go:89] found id: ""
	I1217 00:56:24.349522 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.349528 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:24.349534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:24.349592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:24.375701 1261197 cri.go:89] found id: ""
	I1217 00:56:24.375716 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.375723 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:24.375729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:24.375791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:24.412359 1261197 cri.go:89] found id: ""
	I1217 00:56:24.412373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.412380 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:24.412385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:24.412447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:24.440423 1261197 cri.go:89] found id: ""
	I1217 00:56:24.440437 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.440444 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:24.440450 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:24.440511 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:24.471294 1261197 cri.go:89] found id: ""
	I1217 00:56:24.471308 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.471316 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:24.471322 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:24.471391 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:24.496845 1261197 cri.go:89] found id: ""
	I1217 00:56:24.496859 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.496866 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:24.496874 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:24.496892 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.526610 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:24.526627 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:24.583266 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:24.583327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:24.598272 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:24.598288 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:24.660553 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:24.660563 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:24.660574 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:27.222739 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.232603 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:27.232662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:27.259034 1261197 cri.go:89] found id: ""
	I1217 00:56:27.259048 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.259056 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:27.259061 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:27.259122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:27.282406 1261197 cri.go:89] found id: ""
	I1217 00:56:27.282420 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.282427 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:27.282432 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:27.282490 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:27.306518 1261197 cri.go:89] found id: ""
	I1217 00:56:27.306532 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.306540 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:27.306545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:27.306603 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:27.335278 1261197 cri.go:89] found id: ""
	I1217 00:56:27.335292 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.335299 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:27.335305 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:27.335363 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:27.359793 1261197 cri.go:89] found id: ""
	I1217 00:56:27.359808 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.359815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:27.359829 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:27.359888 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:27.399251 1261197 cri.go:89] found id: ""
	I1217 00:56:27.399275 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.399283 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:27.399289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:27.399355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:27.426464 1261197 cri.go:89] found id: ""
	I1217 00:56:27.426477 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.426495 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:27.426503 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:27.426513 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:27.458980 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:27.458996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:27.514403 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:27.514424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:27.528951 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:27.528969 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:27.592165 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:27.592175 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:27.592187 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.157841 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.168783 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:30.168847 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:30.194237 1261197 cri.go:89] found id: ""
	I1217 00:56:30.194251 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.194259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:30.194264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:30.194329 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:30.220057 1261197 cri.go:89] found id: ""
	I1217 00:56:30.220072 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.220079 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:30.220084 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:30.220141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:30.244965 1261197 cri.go:89] found id: ""
	I1217 00:56:30.244980 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.244987 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:30.244992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:30.245051 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:30.269893 1261197 cri.go:89] found id: ""
	I1217 00:56:30.269907 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.269914 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:30.269919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:30.269976 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:30.294384 1261197 cri.go:89] found id: ""
	I1217 00:56:30.294398 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.294406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:30.294411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:30.294469 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:30.325240 1261197 cri.go:89] found id: ""
	I1217 00:56:30.325254 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.325261 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:30.325266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:30.325322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:30.349591 1261197 cri.go:89] found id: ""
	I1217 00:56:30.349604 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.349611 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:30.349619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:30.349629 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:30.409349 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:30.409368 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:30.426814 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:30.426833 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:30.497852 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:30.497861 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:30.497872 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.559124 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:30.559146 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.090237 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.100535 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:33.100594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:33.124070 1261197 cri.go:89] found id: ""
	I1217 00:56:33.124085 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.124092 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:33.124098 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:33.124155 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:33.148807 1261197 cri.go:89] found id: ""
	I1217 00:56:33.148821 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.148828 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:33.148833 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:33.148894 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:33.175576 1261197 cri.go:89] found id: ""
	I1217 00:56:33.175590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.175597 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:33.175602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:33.175660 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:33.200012 1261197 cri.go:89] found id: ""
	I1217 00:56:33.200026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.200033 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:33.200038 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:33.200095 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:33.224891 1261197 cri.go:89] found id: ""
	I1217 00:56:33.224921 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.224928 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:33.224933 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:33.225001 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:33.249021 1261197 cri.go:89] found id: ""
	I1217 00:56:33.249035 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.249043 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:33.249052 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:33.249108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:33.272696 1261197 cri.go:89] found id: ""
	I1217 00:56:33.272710 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.272717 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:33.272733 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:33.272743 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:33.333826 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:33.333848 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.363111 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:33.363134 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:33.426200 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:33.426219 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:33.444135 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:33.444152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:33.510910 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:36.011142 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.023140 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:36.023216 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:36.051599 1261197 cri.go:89] found id: ""
	I1217 00:56:36.051614 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.051622 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:36.051628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:36.051700 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:36.076217 1261197 cri.go:89] found id: ""
	I1217 00:56:36.076231 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.076239 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:36.076244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:36.076305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:36.104998 1261197 cri.go:89] found id: ""
	I1217 00:56:36.105026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.105034 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:36.105039 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:36.105108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:36.130127 1261197 cri.go:89] found id: ""
	I1217 00:56:36.130142 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.130149 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:36.130154 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:36.130224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:36.155615 1261197 cri.go:89] found id: ""
	I1217 00:56:36.155629 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.155636 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:36.155648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:36.155709 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:36.181850 1261197 cri.go:89] found id: ""
	I1217 00:56:36.181864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.181872 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:36.181877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:36.181937 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:36.208111 1261197 cri.go:89] found id: ""
	I1217 00:56:36.208126 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.208133 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:36.208141 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:36.208152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:36.266007 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:36.266031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:36.281259 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:36.281275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:36.346325 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:36.346335 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:36.346345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:36.412961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:36.412981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:38.945107 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.955445 1261197 kubeadm.go:602] duration metric: took 4m3.371937848s to restartPrimaryControlPlane
	W1217 00:56:38.955509 1261197 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:56:38.955586 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
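	Having spent just over four minutes in that loop, minikube abandons the restart path and falls back to a full kubeadm reset before re-initializing. The equivalent manual fallback, as a sketch assuming the binary path from this run (the sudo env wrapper stands in for the bash -c form above, and the trailing ls only confirms the wipe):

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force

	# After the reset the kubeconfigs are gone, which is exactly what the
	# "No such file or directory" checks below report.
	sudo ls -la /etc/kubernetes/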
	I1217 00:56:39.375604 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:56:39.388977 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:56:39.396884 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:56:39.396954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:56:39.404783 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:56:39.404792 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 00:56:39.404853 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:56:39.412686 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:56:39.412740 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:56:39.420350 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:56:39.427923 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:56:39.427975 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:56:39.435272 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.442721 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:56:39.442775 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.450389 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:56:39.458043 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:56:39.458098 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
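	The grep/rm pairs above are minikube's stale-config cleanup: a kubeconfig is kept only if it already points at the expected control-plane endpoint, and since kubeadm reset just removed all four files, every grep exits with status 2 and the rm is a no-op. The same logic as a compact sketch, assuming the endpoint from this run:

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done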
	I1217 00:56:39.465332 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
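	That init invocation is a single long line; reflowed here for readability with the identical kubeadm flags (only the bash -c wrapper is dropped):

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,\
	DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,\
	FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,\
	FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,\
	FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,\
	FileAvailable--etc-kubernetes-manifests-etcd.yaml,\
	Port-10250,Swap,NumCPU,Mem,SystemVerification,\
	FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

	The ignore list tolerates leftover manifests plus the checks (SystemVerification, Swap, Mem, bridge-nf-call-iptables) that a containerized docker-driver node cannot satisfy the normal way.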
	I1217 00:56:39.508240 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:56:39.508300 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:56:39.586995 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:56:39.587071 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:56:39.587116 1261197 kubeadm.go:319] OS: Linux
	I1217 00:56:39.587161 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:56:39.587217 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:56:39.587273 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:56:39.587330 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:56:39.587376 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:56:39.587433 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:56:39.587488 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:56:39.587544 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:56:39.587589 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
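	The verification failure itself is benign under the docker driver (SystemVerification is on the ignore list above), and the CGROUPS_* lines confirm every controller kubeadm needs is enabled. To spot-check the same controllers by hand, reading /proc/cgroups on the node is usually enough; the awk line is illustrative:

	# Column 4 is the enabled flag; cpu, cpuset, memory, pids, ... should be 1.
	awk 'NR==1 || $4==1 {print $1, $4}' /proc/cgroups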
	I1217 00:56:39.658303 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:56:39.658422 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:56:39.658518 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:56:39.670076 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:56:39.675448 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 00:56:39.675545 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:56:39.675618 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:56:39.675704 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:56:39.675774 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:56:39.675852 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:56:39.675914 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:56:39.675983 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:56:39.676053 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:56:39.676144 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:56:39.676224 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:56:39.676260 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:56:39.676329 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:56:39.801204 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:56:39.954898 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:56:40.065909 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:56:40.451062 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:56:40.596539 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:56:40.597062 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:56:40.600429 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:56:40.603602 1261197 out.go:252]   - Booting up control plane ...
	I1217 00:56:40.603714 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:56:40.603797 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:56:40.604963 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:56:40.625747 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:56:40.625851 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:56:40.633757 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:56:40.634255 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:56:40.634396 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:56:40.778162 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:56:40.778280 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:00:40.776324 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243331s
	I1217 01:00:40.776348 1261197 kubeadm.go:319] 
	I1217 01:00:40.776405 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:00:40.776437 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:00:40.776540 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:00:40.776544 1261197 kubeadm.go:319] 
	I1217 01:00:40.776648 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:00:40.776679 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:00:40.776709 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:00:40.776712 1261197 kubeadm.go:319] 
	I1217 01:00:40.780629 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:00:40.781051 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:00:40.781158 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:00:40.781394 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:00:40.781398 1261197 kubeadm.go:319] 
	I1217 01:00:40.781466 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 01:00:40.781578 1261197 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243331s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:00:40.781696 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:00:41.195061 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:00:41.209438 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:00:41.209493 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:00:41.218235 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:00:41.218244 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 01:00:41.218300 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:00:41.226394 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:00:41.226448 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:00:41.234445 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:00:41.242558 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:00:41.242613 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:00:41.250526 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.258573 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:00:41.258634 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.266278 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:00:41.274420 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:00:41.274476 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:00:41.281748 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:00:41.319491 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:00:41.319792 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:00:41.392691 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:00:41.392755 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:00:41.392789 1261197 kubeadm.go:319] OS: Linux
	I1217 01:00:41.392833 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:00:41.392880 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:00:41.392926 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:00:41.392972 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:00:41.393025 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:00:41.393072 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:00:41.393116 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:00:41.393163 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:00:41.393208 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:00:41.471655 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:00:41.471787 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:00:41.471905 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:00:41.482138 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:00:41.485739 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 01:00:41.485837 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:00:41.485905 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:00:41.485986 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:00:41.486050 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:00:41.486123 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:00:41.486180 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:00:41.486253 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:00:41.486318 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:00:41.486396 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:00:41.486478 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:00:41.486522 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:00:41.486584 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:00:41.603323 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:00:41.901106 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:00:42.054265 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:00:42.414109 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:00:42.682518 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:00:42.683180 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:00:42.685848 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:00:42.689217 1261197 out.go:252]   - Booting up control plane ...
	I1217 01:00:42.689317 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:00:42.689401 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:00:42.689468 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:00:42.713083 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:00:42.713185 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:00:42.721813 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:00:42.722110 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:00:42.722158 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:00:42.862014 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:00:42.862133 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:04:42.862018 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284909s
	I1217 01:04:42.862056 1261197 kubeadm.go:319] 
	I1217 01:04:42.862124 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:04:42.862167 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:04:42.862279 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:04:42.862283 1261197 kubeadm.go:319] 
	I1217 01:04:42.862390 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:04:42.862421 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:04:42.862451 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:04:42.862457 1261197 kubeadm.go:319] 
	I1217 01:04:42.866725 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:04:42.867116 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:04:42.867218 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:04:42.867438 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:04:42.867443 1261197 kubeadm.go:319] 
	I1217 01:04:42.867507 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:04:42.867593 1261197 kubeadm.go:403] duration metric: took 12m7.31765155s to StartCluster
	I1217 01:04:42.867623 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:04:42.867685 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:04:42.892141 1261197 cri.go:89] found id: ""
	I1217 01:04:42.892155 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.892162 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:04:42.892167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:04:42.892231 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:04:42.916795 1261197 cri.go:89] found id: ""
	I1217 01:04:42.916809 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.916817 1261197 logs.go:284] No container was found matching "etcd"
	I1217 01:04:42.916822 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:04:42.916879 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:04:42.945762 1261197 cri.go:89] found id: ""
	I1217 01:04:42.945776 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.945783 1261197 logs.go:284] No container was found matching "coredns"
	I1217 01:04:42.945794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:04:42.945850 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:04:42.970080 1261197 cri.go:89] found id: ""
	I1217 01:04:42.970094 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.970100 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:04:42.970105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:04:42.970161 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:04:42.994293 1261197 cri.go:89] found id: ""
	I1217 01:04:42.994307 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.994314 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:04:42.994319 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:04:42.994375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:04:43.019856 1261197 cri.go:89] found id: ""
	I1217 01:04:43.019871 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.019879 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:04:43.019884 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:04:43.019980 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:04:43.044643 1261197 cri.go:89] found id: ""
	I1217 01:04:43.044657 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.044664 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 01:04:43.044672 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 01:04:43.044682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:04:43.100644 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 01:04:43.100662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:04:43.115507 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:04:43.115524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:04:43.206420 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:04:43.206430 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 01:04:43.206440 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:04:43.268190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 01:04:43.268210 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 01:04:43.298717 1261197 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:04:43.298758 1261197 out.go:285] * 
	W1217 01:04:43.298817 1261197 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:04:43.298838 1261197 out.go:285] * 
	W1217 01:04:43.301057 1261197 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:04:43.305981 1261197 out.go:203] 
	W1217 01:04:43.308777 1261197 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:04:43.308838 1261197 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:04:43.308858 1261197 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:04:43.311954 1261197 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243334749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243430323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243566916Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243654818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243723127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243793503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243862870Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243933976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244147632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244278333Z" level=info msg="Connect containerd service"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244760505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.246010456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.254958867Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255148908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255207460Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255280454Z" level=info msg="Start recovering state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.295825702Z" level=info msg="Start event monitor"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296048071Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296114033Z" level=info msg="Start streaming server"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296179503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296236685Z" level=info msg="runtime interface starting up..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296301301Z" level=info msg="starting plugins..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296367492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296572451Z" level=info msg="containerd successfully booted in 0.086094s"
	Dec 17 00:52:34 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:06:51.352822   23100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:51.353259   23100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:51.354744   23100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:51.355075   23100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:51.356524   23100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:06:51 up  6:49,  0 user,  load average: 0.51, 0.28, 0.50
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:06:48 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 17 01:06:49 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:49 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:49 functional-608344 kubelet[22943]: E1217 01:06:49.187817   22943 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 17 01:06:49 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:49 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:49 functional-608344 kubelet[22980]: E1217 01:06:49.931607   22980 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:49 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:50 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 17 01:06:50 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:50 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:50 functional-608344 kubelet[23016]: E1217 01:06:50.682232   23016 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:50 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:50 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 17 01:06:51 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:51 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:51 functional-608344 kubelet[23104]: E1217 01:06:51.425161   23104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:51 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (370.875105ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.02s)
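Root cause for this group of failures: the v1.35.0-beta.0 kubelet exits on startup because the node runs cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"; the systemd restart counter reaches 491 in the journal above), so the apiserver on port 8441 never comes up and every subsequent kubectl call is refused. The kubeadm preflight warning names the opt-out: the KubeletConfiguration option 'FailCgroupV1' must be set to 'false'. A minimal sketch of that override, assuming the YAML field follows the usual lowerCamelCase spelling (failCgroupV1) and can be merged into the node's kubelet config, for example via the kubeadm patches mechanism already visible in the log; illustrative only, not the harness's actual fix:

	# Sketch of a kubelet config override (hypothetical patch file targeting
	# "kubeletconfiguration", the patch target shown in the log above).
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Let kubelet v1.35+ keep running on a (deprecated) cgroup v1 host.
	failCgroupV1: false

Moving the CI host to cgroup v2, the direction the deprecation notice points to, would avoid the override entirely.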

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-608344 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-608344 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (64.975987ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-608344 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-608344 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-608344 describe po hello-node-connect: exit status 1 (66.433234ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-608344 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-608344 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-608344 logs -l app=hello-node-connect: exit status 1 (59.541202ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-608344 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-608344 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-608344 describe svc hello-node-connect: exit status 1 (58.223829ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-608344 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
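Every probe in this post-mortem fails with the same "connection refused" because nothing is listening on 192.168.49.2:8441; the kubectl invocations can only restate that. A hedged sketch of a pre-flight reachability check (hypothetical helper, not something the suite currently does):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether anything accepts TCP connections on the
// apiserver endpoint, distinguishing "cluster is down" from a genuine
// deployment/service problem before any kubectl calls are spent.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // e.g. "connect: connection refused", as above
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(apiserverUp("192.168.49.2:8441")) // false while kubelet crash-loops
}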
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
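The inspect output shows each container port published on a loopback ephemeral port (8441/tcp, the apiserver port, maps to 127.0.0.1:33946), which is how host-side commands reach the cluster. minikube resolves these mappings with a Go template, visible verbatim in the cli_runner lines of the log below for 22/tcp; a standalone sketch of the same query (assumes the docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor runs the same Go-template inspect query the log below uses,
// returning the host port Docker bound for a given container port.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// For this report's container, "8441/tcp" resolves to "33946".
	fmt.Println(hostPortFor("functional-608344", "8441/tcp"))
}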
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (311.143411ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-608344 cache delete minikube-local-cache-test:functional-608344                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh     │ functional-608344 ssh sudo crictl images                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh     │ functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh     │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ cache   │ functional-608344 cache reload                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ ssh     │ functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │ 17 Dec 25 00:52 UTC │
	│ kubectl │ functional-608344 kubectl -- --context functional-608344 get pods                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ start   │ -p functional-608344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-608344 ssh echo hello                                                                         │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ config  │ functional-608344 config unset cpus                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ config  │ functional-608344 config get cpus                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │                     │
	│ config  │ functional-608344 config set cpus 2                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ config  │ functional-608344 config get cpus                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ config  │ functional-608344 config unset cpus                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ ssh     │ functional-608344 ssh cat /etc/hostname                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │ 17 Dec 25 01:04 UTC │
	│ config  │ functional-608344 config get cpus                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │                     │
	│ tunnel  │ functional-608344 tunnel --alsologtostderr                                                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │                     │
	│ tunnel  │ functional-608344 tunnel --alsologtostderr                                                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │                     │
	│ tunnel  │ functional-608344 tunnel --alsologtostderr                                                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:04 UTC │                     │
	│ addons  │ functional-608344 addons list                                                                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ addons  │ functional-608344 addons list -o json                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:52:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:52:31.527617 1261197 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:52:31.527758 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527763 1261197 out.go:374] Setting ErrFile to fd 2...
	I1217 00:52:31.527767 1261197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:52:31.527997 1261197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:52:31.528338 1261197 out.go:368] Setting JSON to false
	I1217 00:52:31.529124 1261197 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23702,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:52:31.529179 1261197 start.go:143] virtualization:  
	I1217 00:52:31.532534 1261197 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:52:31.537145 1261197 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:52:31.537272 1261197 notify.go:221] Checking for updates...
	I1217 00:52:31.542910 1261197 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:52:31.545800 1261197 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:52:31.548609 1261197 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:52:31.551556 1261197 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:52:31.554346 1261197 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:52:31.557970 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:31.558066 1261197 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:52:31.587498 1261197 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:52:31.587608 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.650823 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.641966313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.650910 1261197 docker.go:319] overlay module found
	I1217 00:52:31.653844 1261197 out.go:179] * Using the docker driver based on existing profile
	I1217 00:52:31.656662 1261197 start.go:309] selected driver: docker
	I1217 00:52:31.656669 1261197 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.656773 1261197 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:52:31.656888 1261197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:52:31.710052 1261197 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:52:31.70077893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:52:31.710641 1261197 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:52:31.710676 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:31.710788 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:31.710847 1261197 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:31.713993 1261197 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
	I1217 00:52:31.716755 1261197 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:52:31.719575 1261197 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:52:31.722367 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:31.722402 1261197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:52:31.722423 1261197 cache.go:65] Caching tarball of preloaded images
	I1217 00:52:31.722451 1261197 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:52:31.722505 1261197 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 00:52:31.722513 1261197 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 00:52:31.722616 1261197 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
	I1217 00:52:31.740561 1261197 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:52:31.740571 1261197 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:52:31.740584 1261197 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:52:31.740613 1261197 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:52:31.740665 1261197 start.go:364] duration metric: took 37.006µs to acquireMachinesLock for "functional-608344"
	I1217 00:52:31.740682 1261197 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:52:31.740687 1261197 fix.go:54] fixHost starting: 
	I1217 00:52:31.740957 1261197 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
	I1217 00:52:31.756910 1261197 fix.go:112] recreateIfNeeded on functional-608344: state=Running err=<nil>
	W1217 00:52:31.756929 1261197 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:52:31.760018 1261197 out.go:252] * Updating the running docker "functional-608344" container ...
	I1217 00:52:31.760042 1261197 machine.go:94] provisionDockerMachine start ...
	I1217 00:52:31.760119 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.776640 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.776960 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.776966 1261197 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:52:31.905356 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:31.905370 1261197 ubuntu.go:182] provisioning hostname "functional-608344"
	I1217 00:52:31.905445 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:31.925834 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:31.926164 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:31.926177 1261197 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
	I1217 00:52:32.067014 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
	
	I1217 00:52:32.067088 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.084172 1261197 main.go:143] libmachine: Using SSH client type: native
	I1217 00:52:32.084485 1261197 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I1217 00:52:32.084499 1261197 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:52:32.214216 1261197 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:52:32.214232 1261197 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 00:52:32.214253 1261197 ubuntu.go:190] setting up certificates
	I1217 00:52:32.214268 1261197 provision.go:84] configureAuth start
	I1217 00:52:32.214325 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:32.232515 1261197 provision.go:143] copyHostCerts
	I1217 00:52:32.232580 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 00:52:32.232588 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 00:52:32.232671 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 00:52:32.232772 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 00:52:32.232776 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 00:52:32.232801 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 00:52:32.232878 1261197 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 00:52:32.232885 1261197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 00:52:32.232913 1261197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 00:52:32.232967 1261197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
	I1217 00:52:32.616759 1261197 provision.go:177] copyRemoteCerts
	I1217 00:52:32.616824 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:52:32.616864 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.638193 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.737540 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:52:32.755258 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:52:32.772709 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:52:32.791423 1261197 provision.go:87] duration metric: took 577.141949ms to configureAuth
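configureAuth above regenerates the Docker machine server certificate with SANs for every name the host answers to ([127.0.0.1 192.168.49.2 functional-608344 localhost minikube], per the provision.go:117 line). A hypothetical sketch of an equivalent x509 template (minikube's real implementation lives in its provision code; the 26280h lifetime comes from the CertExpiration field in the cluster config logged earlier):

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate mirrors the SAN set logged above: the DNS names
// and IPs the machine certificate must cover for TLS to the node.
func serverCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-608344"}},
		DNSNames:     []string{"functional-608344", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
}

func main() { fmt.Println(serverCertTemplate().DNSNames) }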
	I1217 00:52:32.791441 1261197 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:52:32.791635 1261197 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 00:52:32.791640 1261197 machine.go:97] duration metric: took 1.031594088s to provisionDockerMachine
	I1217 00:52:32.791646 1261197 start.go:293] postStartSetup for "functional-608344" (driver="docker")
	I1217 00:52:32.791656 1261197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:52:32.791701 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:52:32.791750 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.809559 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:32.905557 1261197 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:52:32.908787 1261197 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:52:32.908827 1261197 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:52:32.908837 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 00:52:32.908891 1261197 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 00:52:32.908975 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 00:52:32.909048 1261197 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
	I1217 00:52:32.909089 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
	I1217 00:52:32.916399 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:32.933317 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
	I1217 00:52:32.950047 1261197 start.go:296] duration metric: took 158.386583ms for postStartSetup
	I1217 00:52:32.950118 1261197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:52:32.950170 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:32.968857 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.062653 1261197 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:52:33.067278 1261197 fix.go:56] duration metric: took 1.32658398s for fixHost
	I1217 00:52:33.067294 1261197 start.go:83] releasing machines lock for "functional-608344", held for 1.326621929s
	I1217 00:52:33.067361 1261197 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
	I1217 00:52:33.084000 1261197 ssh_runner.go:195] Run: cat /version.json
	I1217 00:52:33.084040 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.084288 1261197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:52:33.084348 1261197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
	I1217 00:52:33.108566 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.111371 1261197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
	I1217 00:52:33.289488 1261197 ssh_runner.go:195] Run: systemctl --version
	I1217 00:52:33.296034 1261197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:52:33.300233 1261197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:52:33.300292 1261197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:52:33.307943 1261197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:52:33.307957 1261197 start.go:496] detecting cgroup driver to use...
	I1217 00:52:33.307988 1261197 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:52:33.308034 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 00:52:33.325973 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:52:33.341243 1261197 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:52:33.341313 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:52:33.357700 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:52:33.373469 1261197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:52:33.498827 1261197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:52:33.614529 1261197 docker.go:234] disabling docker service ...
	I1217 00:52:33.614598 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:52:33.629592 1261197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:52:33.642692 1261197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:52:33.771770 1261197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:52:33.894226 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:52:33.907337 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:52:33.922634 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:52:33.932171 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:52:33.941438 1261197 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:52:33.941508 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:52:33.950063 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.958782 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:52:33.967078 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:52:33.975466 1261197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:52:33.983339 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:52:33.991895 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:52:34.000351 1261197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:52:34.010891 1261197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:52:34.018879 1261197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:52:34.026594 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.150165 1261197 ssh_runner.go:195] Run: sudo systemctl restart containerd
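The sed edits above rewrite /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected at 00:52:33 (SystemdCgroup = false), pin the pause image, and re-enable unprivileged ports before the daemon restart. A Go sketch mirroring just the cgroup-driver rewrite (illustrative only; the suite does this with sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// systemdCgroupRe matches the SystemdCgroup key wherever it appears in
// config.toml, preserving its indentation, like the sed expression
// 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' above.
var systemdCgroupRe = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)

func setCgroupfsDriver(config string) string {
	return systemdCgroupRe.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Print(setCgroupfsDriver("    SystemdCgroup = true\n")) // "    SystemdCgroup = false"
}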
	I1217 00:52:34.299897 1261197 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 00:52:34.299958 1261197 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 00:52:34.303895 1261197 start.go:564] Will wait 60s for crictl version
	I1217 00:52:34.303948 1261197 ssh_runner.go:195] Run: which crictl
	I1217 00:52:34.307381 1261197 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:52:34.334814 1261197 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 00:52:34.334888 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.355644 1261197 ssh_runner.go:195] Run: containerd --version
	I1217 00:52:34.381331 1261197 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 00:52:34.384165 1261197 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:52:34.399831 1261197 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:52:34.407243 1261197 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:52:34.410160 1261197 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:52:34.410312 1261197 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 00:52:34.410394 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.434882 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.434894 1261197 containerd.go:534] Images already preloaded, skipping extraction
	I1217 00:52:34.434955 1261197 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:52:34.460154 1261197 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 00:52:34.460166 1261197 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:52:34.460173 1261197 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1217 00:52:34.460276 1261197 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
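The kubelet drop-in above is rendered from the node's settings (version, hostname, node IP). A hedged Go sketch of generating that unit text with text/template; the node struct and field names here are illustrative, not minikube's own types:

// kubeletflags.go - sketch of rendering the kubelet systemd drop-in.
package main

import (
	"fmt"
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion, NodeName, NodeIP string
}

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	// Values taken from the log above.
	if err := t.Execute(os.Stdout, node{"v1.35.0-beta.0", "functional-608344", "192.168.49.2"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}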
	I1217 00:52:34.460340 1261197 ssh_runner.go:195] Run: sudo crictl info
	I1217 00:52:34.485418 1261197 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:52:34.485440 1261197 cni.go:84] Creating CNI manager for ""
	I1217 00:52:34.485447 1261197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:52:34.485462 1261197 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:52:34.485483 1261197 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:52:34.485591 1261197 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:52:34.485688 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:52:34.493475 1261197 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:52:34.493536 1261197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:52:34.501738 1261197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 00:52:34.515117 1261197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:52:34.528350 1261197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
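The kubeadm.yaml.new written above is the four-document YAML stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stdlib-only Go sketch that splits such a stream on "---" separators and reports each document's kind; a real consumer would use a YAML parser:

// kinds.go - sketch of enumerating kinds in a multi-document kubeadm config.
package main

import (
	"fmt"
	"strings"
)

const config = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	for i, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}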
	I1217 00:52:34.541325 1261197 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:52:34.545027 1261197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:52:34.663222 1261197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:52:34.871198 1261197 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
	I1217 00:52:34.871209 1261197 certs.go:195] generating shared ca certs ...
	I1217 00:52:34.871223 1261197 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:52:34.871350 1261197 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 00:52:34.871405 1261197 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 00:52:34.871411 1261197 certs.go:257] generating profile certs ...
	I1217 00:52:34.871503 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
	I1217 00:52:34.871558 1261197 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
	I1217 00:52:34.871595 1261197 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
	I1217 00:52:34.871710 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 00:52:34.871738 1261197 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 00:52:34.871746 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:52:34.871770 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:52:34.871791 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:52:34.871819 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 00:52:34.871867 1261197 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 00:52:34.872533 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:52:34.890674 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:52:34.908252 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:52:34.925752 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 00:52:34.942982 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:52:34.961072 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:52:34.978793 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:52:34.995794 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:52:35.016106 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:52:35.035474 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 00:52:35.054248 1261197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 00:52:35.072025 1261197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:52:35.085836 1261197 ssh_runner.go:195] Run: openssl version
	I1217 00:52:35.092498 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.100138 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:52:35.107992 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111748 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.111805 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:52:35.153206 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:52:35.161118 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.168560 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 00:52:35.176276 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180431 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.180496 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 00:52:35.224274 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:52:35.231870 1261197 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.239209 1261197 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 00:52:35.246988 1261197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250581 1261197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.250708 1261197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 00:52:35.291833 1261197 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
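Each ln/openssl/test pair above installs a CA into the OpenSSL trust directory under its subject hash (e.g. b5213941.0, 51391683.0, 3ec20f2e.0). A Go sketch of that step; paths mirror the log, and running it for real requires root:

// cahash.go - sketch of installing a CA as /etc/ssl/certs/<subject-hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, like "ln -fs"
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("installed", link)
}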
	I1217 00:52:35.299197 1261197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:52:35.302994 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:52:35.343876 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:52:35.384935 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:52:35.425945 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:52:35.468160 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:52:35.509040 1261197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
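The "-checkend 86400" runs above ask whether each control-plane cert remains valid for another 24h. An equivalent check in Go with crypto/x509; the path is one example taken from the log:

// checkend.go - Go equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid iff expiry lies beyond now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("valid for another 24h:", ok)
}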
	I1217 00:52:35.549950 1261197 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:52:35.550030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 00:52:35.550101 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.575493 1261197 cri.go:89] found id: ""
	I1217 00:52:35.575551 1261197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:52:35.583488 1261197 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:52:35.583498 1261197 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:52:35.583562 1261197 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:52:35.590939 1261197 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.591435 1261197 kubeconfig.go:125] found "functional-608344" server: "https://192.168.49.2:8441"
	I1217 00:52:35.592674 1261197 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:52:35.600478 1261197 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:38:00.276726971 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:52:34.535031442 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
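Drift is detected above purely from diff's exit status: 0 means the old and new kubeadm.yaml match, 1 means they differ. A Go sketch of that convention (paths mirror the log; not minikube's actual implementation):

// drift.go - sketch of config-drift detection via "diff -u" exit codes.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// diff exits 0 when files match, 1 when they differ, >1 on error.
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift")
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
	default:
		fmt.Fprintln(os.Stderr, "diff failed:", err)
		os.Exit(2)
	}
}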
	I1217 00:52:35.600490 1261197 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:52:35.600503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 00:52:35.600556 1261197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:52:35.635394 1261197 cri.go:89] found id: ""
	I1217 00:52:35.635452 1261197 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:52:35.655954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:52:35.664843 1261197 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 00:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 00:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 17 00:42 /etc/kubernetes/scheduler.conf
	
	I1217 00:52:35.664920 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:52:35.673926 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:52:35.681783 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.681837 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:52:35.689482 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.698370 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.698438 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:52:35.705988 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:52:35.714414 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:52:35.714484 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:52:35.722072 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:52:35.729848 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:35.776855 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.711300 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.926722 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:52:36.999232 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
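The restart path above replays individual "kubeadm init phase" subcommands against the regenerated config, with the versioned binaries directory prepended to PATH. A hedged Go sketch of that sequence; phase list and paths are taken from the log, not minikube's actual code:

// phases.go - sketch of running selective kubeadm init phases.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	// Later duplicate PATH entries win, so this overrides the inherited PATH.
	env := append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:"+os.Getenv("PATH"))
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = env
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n%s", p, err, out)
			os.Exit(1)
		}
	}
}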
	I1217 00:52:37.047947 1261197 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:52:37.048019 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:37.548207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.048861 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:38.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.048206 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:39.548189 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.049366 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:40.548557 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.048152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:41.549106 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.048793 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:42.549138 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.049014 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:43.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.048840 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:44.548921 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.048979 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:45.549120 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.049193 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:46.548932 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.048207 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:47.548119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.048127 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:48.548295 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.049080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:49.548771 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.048210 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:50.548773 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.048258 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:51.549096 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.048188 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:52.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.049033 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:53.549038 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.048512 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:54.548619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.048253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:55.549044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.048294 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:56.548919 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.048218 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:57.548765 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:58.548855 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.048880 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:52:59.548221 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.048194 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:00.548710 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.048613 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:01.548834 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.049119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:02.548167 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.048599 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:03.549080 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.048587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:04.548846 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.048217 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:05.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.049020 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:06.548398 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.049097 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:07.548960 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.049065 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:08.548376 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.048388 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:09.548808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.048244 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:10.548239 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.049099 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:11.549083 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:12.549030 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.048350 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:13.548287 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.048923 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:14.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.048292 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:15.549092 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.048874 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:16.549144 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.048777 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:17.548153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.048868 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:18.548124 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.048936 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:19.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.048238 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:20.548216 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.048954 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:21.548662 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.049044 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:22.548942 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.048968 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:23.548787 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.048489 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:24.548243 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.048236 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:25.549178 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.048993 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:26.548676 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.049104 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:27.548930 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.048853 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:28.549118 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.048215 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:29.549153 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.048154 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:30.549126 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.048949 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:31.549114 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.048782 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:32.548760 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.048205 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:33.548209 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.049183 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:34.548231 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.049002 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:35.549031 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.048208 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:36.548852 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
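The block above polls pgrep roughly every 500ms, waiting for a kube-apiserver process to appear. A minimal Go version of that loop; the 1-minute deadline here is an assumption:

// apiwait.go - sketch of the apiserver-process wait loop.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(1 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a match is found, 1 when there is none.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver")
	os.Exit(1)
}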
	I1217 00:53:37.048332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:37.048420 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:37.076924 1261197 cri.go:89] found id: ""
	I1217 00:53:37.076939 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.076947 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:37.076953 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:37.077010 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:37.103936 1261197 cri.go:89] found id: ""
	I1217 00:53:37.103950 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.103957 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:37.103962 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:37.104019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:37.134578 1261197 cri.go:89] found id: ""
	I1217 00:53:37.134592 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.134599 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:37.134605 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:37.134667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:37.162973 1261197 cri.go:89] found id: ""
	I1217 00:53:37.162986 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.162994 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:37.162999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:37.163063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:37.193768 1261197 cri.go:89] found id: ""
	I1217 00:53:37.193782 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.193789 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:37.193794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:37.193864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:37.217378 1261197 cri.go:89] found id: ""
	I1217 00:53:37.217391 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.217398 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:37.217403 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:37.217464 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:37.245938 1261197 cri.go:89] found id: ""
	I1217 00:53:37.245952 1261197 logs.go:282] 0 containers: []
	W1217 00:53:37.245959 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:37.245967 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:37.245977 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:37.303279 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:37.303297 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:37.317809 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:37.317826 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:37.378847 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:37.370318   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.371041   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.372823   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.373408   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:37.374931   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:37.378858 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:37.378870 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:37.440776 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:37.440795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
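The container-status gather above uses a shell fallback chain: try crictl first, and fall back to docker if crictl is missing or fails. A Go sketch of the same first-success pattern:

// fallback.go - sketch of "crictl ps -a || docker ps -a".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	for _, argv := range [][]string{
		{"crictl", "ps", "-a"},
		{"docker", "ps", "-a"},
	} {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err == nil {
			os.Stdout.Write(out) // first tool that succeeds wins
			return
		}
		fmt.Fprintf(os.Stderr, "%s failed: %v\n", argv[0], err)
	}
	os.Exit(1)
}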
	I1217 00:53:39.970536 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:39.980652 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:39.980714 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:40.014928 1261197 cri.go:89] found id: ""
	I1217 00:53:40.014943 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.014950 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:40.014956 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:40.015027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:40.044249 1261197 cri.go:89] found id: ""
	I1217 00:53:40.044284 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.044292 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:40.044299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:40.044375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:40.071071 1261197 cri.go:89] found id: ""
	I1217 00:53:40.071086 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.071094 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:40.071100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:40.071166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:40.096922 1261197 cri.go:89] found id: ""
	I1217 00:53:40.096936 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.096944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:40.096950 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:40.097019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:40.126209 1261197 cri.go:89] found id: ""
	I1217 00:53:40.126223 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.126231 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:40.126237 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:40.126302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:40.166443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.166457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.166465 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:40.166470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:40.166532 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:40.194443 1261197 cri.go:89] found id: ""
	I1217 00:53:40.194457 1261197 logs.go:282] 0 containers: []
	W1217 00:53:40.194465 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:40.194472 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:40.194483 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:40.249960 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:40.249980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:40.264714 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:40.264730 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:40.334158 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:40.324578   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.325886   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.326832   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.328497   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:40.329116   10910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:40.334168 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:40.334179 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:40.396176 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:40.396196 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:42.927525 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:42.939255 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:42.939317 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:42.967766 1261197 cri.go:89] found id: ""
	I1217 00:53:42.967780 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.967788 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:42.967793 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:42.967852 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:42.992216 1261197 cri.go:89] found id: ""
	I1217 00:53:42.992230 1261197 logs.go:282] 0 containers: []
	W1217 00:53:42.992238 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:42.992244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:42.992301 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:43.018174 1261197 cri.go:89] found id: ""
	I1217 00:53:43.018188 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.018196 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:43.018201 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:43.018260 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:43.043673 1261197 cri.go:89] found id: ""
	I1217 00:53:43.043687 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.043695 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:43.043701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:43.043763 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:43.067990 1261197 cri.go:89] found id: ""
	I1217 00:53:43.068005 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.068012 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:43.068017 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:43.068079 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:43.093908 1261197 cri.go:89] found id: ""
	I1217 00:53:43.093923 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.093930 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:43.093936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:43.093995 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:43.120199 1261197 cri.go:89] found id: ""
	I1217 00:53:43.120213 1261197 logs.go:282] 0 containers: []
	W1217 00:53:43.120220 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:43.120228 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:43.120238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:43.181971 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:43.181989 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:43.197524 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:43.197541 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:43.261336 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:53:43.252884   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254024   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.254524   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.255978   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:43.256451   11015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:53:43.261356 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:43.261366 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:43.322519 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:43.322538 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:45.852691 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:45.863769 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:45.863831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:45.888335 1261197 cri.go:89] found id: ""
	I1217 00:53:45.888350 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.888357 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:45.888363 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:45.888422 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:45.918194 1261197 cri.go:89] found id: ""
	I1217 00:53:45.918209 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.918216 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:45.918222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:45.918285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:45.943809 1261197 cri.go:89] found id: ""
	I1217 00:53:45.943824 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.943831 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:45.943836 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:45.943893 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:45.969167 1261197 cri.go:89] found id: ""
	I1217 00:53:45.969182 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.969189 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:45.969195 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:45.969261 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:45.995411 1261197 cri.go:89] found id: ""
	I1217 00:53:45.995425 1261197 logs.go:282] 0 containers: []
	W1217 00:53:45.995432 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:45.995437 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:45.995495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:46.025138 1261197 cri.go:89] found id: ""
	I1217 00:53:46.025153 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.025161 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:46.025167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:46.025230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:46.052563 1261197 cri.go:89] found id: ""
	I1217 00:53:46.052578 1261197 logs.go:282] 0 containers: []
	W1217 00:53:46.052585 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:46.052594 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:46.052604 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:46.110268 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:46.110286 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:46.128213 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:46.128230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:46.211985 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:46.203995   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.204533   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206153   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.206600   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:46.208173   11119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:46.212008 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:46.212018 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:46.274022 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:46.274041 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
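Each probe cycle walks a fixed list of control-plane component names and asks crictl for matching container IDs; an empty result is what produces the repeated 'No container was found matching ...' warnings. A rough standalone Go sketch of that scan, run locally rather than through minikube's ssh_runner (components and listIDs are illustrative names):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // The component names probed in the cycles above, in the same order.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listIDs runs the same "crictl ps -a --quiet --name=..." query and
    // returns one container ID per output line (empty slice when none).
    func listIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range components {
    		ids, err := listIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Println(c, ids)
    	}
    }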
	I1217 00:53:48.809808 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:48.820115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:48.820172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:48.846046 1261197 cri.go:89] found id: ""
	I1217 00:53:48.846062 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.846069 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:48.846075 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:48.846145 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:48.871706 1261197 cri.go:89] found id: ""
	I1217 00:53:48.871721 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.871728 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:48.871734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:48.871794 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:48.896325 1261197 cri.go:89] found id: ""
	I1217 00:53:48.896341 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.896348 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:48.896353 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:48.896413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:48.922321 1261197 cri.go:89] found id: ""
	I1217 00:53:48.922335 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.922342 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:48.922348 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:48.922406 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:48.951311 1261197 cri.go:89] found id: ""
	I1217 00:53:48.951325 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.951332 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:48.951337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:48.951395 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:48.976196 1261197 cri.go:89] found id: ""
	I1217 00:53:48.976211 1261197 logs.go:282] 0 containers: []
	W1217 00:53:48.976218 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:48.976224 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:48.976285 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:49.005156 1261197 cri.go:89] found id: ""
	I1217 00:53:49.005173 1261197 logs.go:282] 0 containers: []
	W1217 00:53:49.005181 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:49.005190 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:49.005202 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:49.067318 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:49.067385 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:49.083407 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:49.083424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:49.159947 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:49.151768   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.152655   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154252   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.154556   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:49.156004   11217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:49.159958 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:49.159970 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:49.230934 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:49.230956 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
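The timestamps show the whole probe cycle repeating on a roughly three-second cadence until kube-apiserver appears or the wait gives up. A minimal Go sketch of that retry loop; the 3s interval and 2-minute timeout are illustrative guesses, not minikube's actual constants:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the same pgrep pattern the log shows until a
    // kube-apiserver process exists or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil // pgrep exited 0: process found
    		}
    		time.Sleep(3 * time.Second) // pgrep exited non-zero: not up yet
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }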
	I1217 00:53:51.761379 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:51.771759 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:51.771821 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:51.796369 1261197 cri.go:89] found id: ""
	I1217 00:53:51.796384 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.796391 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:51.796396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:51.796454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:51.822318 1261197 cri.go:89] found id: ""
	I1217 00:53:51.822333 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.822340 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:51.822345 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:51.822409 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:51.847395 1261197 cri.go:89] found id: ""
	I1217 00:53:51.847409 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.847416 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:51.847421 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:51.847479 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:51.872529 1261197 cri.go:89] found id: ""
	I1217 00:53:51.872544 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.872552 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:51.872557 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:51.872619 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:51.900871 1261197 cri.go:89] found id: ""
	I1217 00:53:51.900885 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.900893 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:51.900898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:51.900967 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:51.928534 1261197 cri.go:89] found id: ""
	I1217 00:53:51.928548 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.928555 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:51.928560 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:51.928621 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:51.957597 1261197 cri.go:89] found id: ""
	I1217 00:53:51.957611 1261197 logs.go:282] 0 containers: []
	W1217 00:53:51.957619 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:51.957627 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:51.957636 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:52.016924 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:52.016945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:52.033440 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:52.033458 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:52.106352 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:52.097149   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.097956   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.099582   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.100150   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:52.101970   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:52.106373 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:52.106384 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:52.173915 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:52.173934 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
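Every kubectl call in this stretch fails identically: the kubeconfig targets https://localhost:8441 (the apiserver port configured for this profile) and the TCP dial is refused because nothing is listening. A quick Go probe of the same endpoint would reproduce the symptom from inside the node:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same host:port the failing kubectl calls dial.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }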
	I1217 00:53:54.703159 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:54.713797 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:54.713862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:54.739273 1261197 cri.go:89] found id: ""
	I1217 00:53:54.739287 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.739294 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:54.739299 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:54.739355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:54.770340 1261197 cri.go:89] found id: ""
	I1217 00:53:54.770355 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.770362 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:54.770367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:54.770430 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:54.795583 1261197 cri.go:89] found id: ""
	I1217 00:53:54.795597 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.795604 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:54.795611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:54.795670 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:54.823673 1261197 cri.go:89] found id: ""
	I1217 00:53:54.823688 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.823696 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:54.823701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:54.823760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:54.849899 1261197 cri.go:89] found id: ""
	I1217 00:53:54.849913 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.849921 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:54.849927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:54.849986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:54.874746 1261197 cri.go:89] found id: ""
	I1217 00:53:54.874761 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.874767 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:54.874773 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:54.874831 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:54.898944 1261197 cri.go:89] found id: ""
	I1217 00:53:54.898961 1261197 logs.go:282] 0 containers: []
	W1217 00:53:54.898968 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:54.898975 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:54.898986 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:54.913535 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:54.913552 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:54.975130 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:54.966405   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.967135   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.968998   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.969596   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:54.971309   11430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:54.975140 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:54.975150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:55.037117 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:55.037139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:53:55.067838 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:55.067855 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.627174 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:53:57.637082 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:57.637153 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:57.661527 1261197 cri.go:89] found id: ""
	I1217 00:53:57.661541 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.661548 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:57.661553 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:53:57.661611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:57.685175 1261197 cri.go:89] found id: ""
	I1217 00:53:57.685189 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.685200 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:53:57.685205 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:53:57.685263 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:57.711702 1261197 cri.go:89] found id: ""
	I1217 00:53:57.711717 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.711724 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:53:57.711729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:57.711868 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:57.740036 1261197 cri.go:89] found id: ""
	I1217 00:53:57.740050 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.740058 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:57.740063 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:57.740122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:57.768675 1261197 cri.go:89] found id: ""
	I1217 00:53:57.768697 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.768704 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:57.768710 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:57.768775 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:57.792870 1261197 cri.go:89] found id: ""
	I1217 00:53:57.792883 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.792890 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:57.792895 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:57.792965 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:57.817001 1261197 cri.go:89] found id: ""
	I1217 00:53:57.817015 1261197 logs.go:282] 0 containers: []
	W1217 00:53:57.817022 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:57.817031 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:57.817053 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:57.871861 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:57.871881 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:57.886738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:57.886755 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:53:57.949301 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:53:57.941050   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.941766   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.943533   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.944114   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:53:57.945681   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:53:57.949319 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:53:57.949329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:53:58.010230 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:53:58.010249 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
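The "Gathering logs for ..." steps are plain bash one-liners: journalctl tails for the kubelet and containerd units plus a filtered dmesg. A self-contained Go sketch running the same pipelines locally (command strings copied from the log; they need sudo and systemd to succeed):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // The exact pipelines the gather steps above execute on the node.
    var gathers = map[string]string{
    	"kubelet":    "sudo journalctl -u kubelet -n 400",
    	"containerd": "sudo journalctl -u containerd -n 400",
    	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    }

    func main() {
    	for name, pipeline := range gathers {
    		out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
    		fmt.Printf("== %s: %d bytes, err=%v\n", name, len(out), err)
    	}
    }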
	I1217 00:54:00.540430 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:00.550751 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:00.550814 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:00.576488 1261197 cri.go:89] found id: ""
	I1217 00:54:00.576501 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.576510 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:00.576515 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:00.576573 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:00.601369 1261197 cri.go:89] found id: ""
	I1217 00:54:00.601383 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.601396 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:00.601401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:00.601459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:00.625632 1261197 cri.go:89] found id: ""
	I1217 00:54:00.625667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.625675 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:00.625680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:00.625738 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:00.651689 1261197 cri.go:89] found id: ""
	I1217 00:54:00.651703 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.651710 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:00.651715 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:00.651777 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:00.679744 1261197 cri.go:89] found id: ""
	I1217 00:54:00.679757 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.679765 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:00.679770 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:00.679828 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:00.709559 1261197 cri.go:89] found id: ""
	I1217 00:54:00.709573 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.709580 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:00.709585 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:00.709662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:00.734417 1261197 cri.go:89] found id: ""
	I1217 00:54:00.734432 1261197 logs.go:282] 0 containers: []
	W1217 00:54:00.734439 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:00.734447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:00.734457 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.797638 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.789408   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.790268   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.791808   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.792286   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.793856   11642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:00.797675 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:00.797685 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:00.859579 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.859598 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:00.885766 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:00.885783 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:00.946324 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:00.946344 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.461934 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:03.472673 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:03.472733 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:03.496966 1261197 cri.go:89] found id: ""
	I1217 00:54:03.496980 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.496987 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:03.496992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:03.497048 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:03.522192 1261197 cri.go:89] found id: ""
	I1217 00:54:03.522207 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.522214 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:03.522219 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:03.522280 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:03.547069 1261197 cri.go:89] found id: ""
	I1217 00:54:03.547083 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.547090 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:03.547095 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:03.547175 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:03.572136 1261197 cri.go:89] found id: ""
	I1217 00:54:03.572149 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.572156 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:03.572162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:03.572234 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:03.600755 1261197 cri.go:89] found id: ""
	I1217 00:54:03.600770 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.600782 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:03.600788 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:03.600859 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:03.629818 1261197 cri.go:89] found id: ""
	I1217 00:54:03.629836 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.629843 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:03.629849 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:03.629905 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:03.656769 1261197 cri.go:89] found id: ""
	I1217 00:54:03.656783 1261197 logs.go:282] 0 containers: []
	W1217 00:54:03.656790 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:03.656797 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:03.656807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:03.712292 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:03.712313 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:03.727502 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:03.727518 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:03.791668 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:03.782970   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.783616   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785323   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.785958   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:03.787552   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:03.791678 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:03.791688 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:03.854180 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:03.854200 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:06.381966 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:06.393097 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:06.393156 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:06.429087 1261197 cri.go:89] found id: ""
	I1217 00:54:06.429101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.429108 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:06.429113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:06.429189 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:06.454075 1261197 cri.go:89] found id: ""
	I1217 00:54:06.454091 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.454101 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:06.454106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:06.454179 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:06.478067 1261197 cri.go:89] found id: ""
	I1217 00:54:06.478081 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.478088 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:06.478093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:06.478149 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:06.503508 1261197 cri.go:89] found id: ""
	I1217 00:54:06.503522 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.503529 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:06.503534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:06.503592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:06.532125 1261197 cri.go:89] found id: ""
	I1217 00:54:06.532139 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.532146 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:06.532151 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:06.532218 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:06.557383 1261197 cri.go:89] found id: ""
	I1217 00:54:06.557397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.557404 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:06.557409 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:06.557482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:06.583086 1261197 cri.go:89] found id: ""
	I1217 00:54:06.583101 1261197 logs.go:282] 0 containers: []
	W1217 00:54:06.583109 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:06.583117 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:06.583128 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:06.638133 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:06.638153 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:06.652420 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:06.652439 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:06.715679 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:06.706907   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.707622   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709271   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.709877   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:06.711565   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:06.715692 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:06.715703 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:06.783529 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:06.783557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.314587 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:09.324947 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:09.325009 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:09.349922 1261197 cri.go:89] found id: ""
	I1217 00:54:09.349945 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.349952 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:09.349957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:09.350025 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:09.381538 1261197 cri.go:89] found id: ""
	I1217 00:54:09.381552 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.381560 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:09.381565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:09.381627 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:09.412584 1261197 cri.go:89] found id: ""
	I1217 00:54:09.412606 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.412613 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:09.412621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:09.412696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:09.446518 1261197 cri.go:89] found id: ""
	I1217 00:54:09.446533 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.446541 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:09.446547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:09.446620 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:09.477943 1261197 cri.go:89] found id: ""
	I1217 00:54:09.477956 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.477963 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:09.477968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:09.478027 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:09.503386 1261197 cri.go:89] found id: ""
	I1217 00:54:09.503400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.503407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:09.503413 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:09.503476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:09.528266 1261197 cri.go:89] found id: ""
	I1217 00:54:09.528292 1261197 logs.go:282] 0 containers: []
	W1217 00:54:09.528300 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:09.528308 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:09.528318 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:09.590766 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:09.590786 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:09.618540 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:09.618556 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:09.675017 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:09.675037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:09.689541 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:09.689557 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:09.753013 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:09.744768   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.745442   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747017   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.747521   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:09.749196   11982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
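Every one of these describe-nodes attempts fails before any API object is read: nothing is listening on port 8441, so kubectl's discovery request is refused at the TCP layer. From a shell on the node the same condition can be checked directly; this is a sketch, with ss and curl as standard tools and /livez being the stock kube-apiserver health endpoint rather than anything specific to this run:

    # is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "no listener on port 8441"
    # if a listener exists, query apiserver health (-k because the cert is self-signed)
    curl -sk https://localhost:8441/livez; echo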
	I1217 00:54:12.253253 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:12.263867 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:12.263926 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:12.289871 1261197 cri.go:89] found id: ""
	I1217 00:54:12.289888 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.289904 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:12.289910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:12.289975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:12.316441 1261197 cri.go:89] found id: ""
	I1217 00:54:12.316455 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.316462 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:12.316467 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:12.316527 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:12.340348 1261197 cri.go:89] found id: ""
	I1217 00:54:12.340362 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.340370 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:12.340375 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:12.340432 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:12.364082 1261197 cri.go:89] found id: ""
	I1217 00:54:12.364097 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.364104 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:12.364109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:12.364167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:12.390849 1261197 cri.go:89] found id: ""
	I1217 00:54:12.390863 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.390870 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:12.390875 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:12.390933 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:12.420430 1261197 cri.go:89] found id: ""
	I1217 00:54:12.420444 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.420451 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:12.420456 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:12.420518 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:12.448205 1261197 cri.go:89] found id: ""
	I1217 00:54:12.448221 1261197 logs.go:282] 0 containers: []
	W1217 00:54:12.448228 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:12.448236 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:12.448247 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:12.504931 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:12.504952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:12.519968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:12.519985 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:12.584010 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:12.575570   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.576392   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578076   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.578485   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:12.580065   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:12.584021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:12.584032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:12.647102 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:12.647123 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
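Each retry sweeps the same seven control-plane and CNI components, one crictl call per name. The whole sweep collapses into a single loop; the flags and the component list are copied from the log lines above, and only the loop and the labels are additions:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -n "$ids" ] && echo "$name: $ids" || echo "$name: no container found"
    done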
	I1217 00:54:15.176013 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:15.186921 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:15.186985 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:15.215197 1261197 cri.go:89] found id: ""
	I1217 00:54:15.215211 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.215218 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:15.215226 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:15.215284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:15.240116 1261197 cri.go:89] found id: ""
	I1217 00:54:15.240130 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.240137 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:15.240142 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:15.240201 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:15.267788 1261197 cri.go:89] found id: ""
	I1217 00:54:15.267802 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.267809 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:15.267814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:15.267871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:15.291699 1261197 cri.go:89] found id: ""
	I1217 00:54:15.291713 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.291720 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:15.291725 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:15.291782 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:15.315522 1261197 cri.go:89] found id: ""
	I1217 00:54:15.315536 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.315542 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:15.315548 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:15.315609 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:15.340325 1261197 cri.go:89] found id: ""
	I1217 00:54:15.340339 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.340346 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:15.340361 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:15.340423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:15.369889 1261197 cri.go:89] found id: ""
	I1217 00:54:15.369917 1261197 logs.go:282] 0 containers: []
	W1217 00:54:15.369924 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:15.369932 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:15.369942 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:15.428658 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:15.428679 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:15.444080 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:15.444099 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:15.512831 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:15.504258   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.504866   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506417   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.506903   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:15.508413   12179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:15.512843 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:15.512861 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:15.578043 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:15.578063 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
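The journal and kernel log sources gathered on every pass are plain journalctl and dmesg invocations, so the same bundle can be pulled by hand on the node; the commands below are copied from the log lines, with only the comments added:

    sudo journalctl -u kubelet -n 400       # kubelet unit log, last 400 lines
    sudo journalctl -u containerd -n 400    # containerd unit log, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors only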
	I1217 00:54:18.110567 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:18.120744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:18.120802 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:18.145094 1261197 cri.go:89] found id: ""
	I1217 00:54:18.145108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.145116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:18.145122 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:18.145185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:18.169518 1261197 cri.go:89] found id: ""
	I1217 00:54:18.169532 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.169542 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:18.169547 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:18.169607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:18.194342 1261197 cri.go:89] found id: ""
	I1217 00:54:18.194356 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.194363 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:18.194369 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:18.194427 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:18.222931 1261197 cri.go:89] found id: ""
	I1217 00:54:18.222944 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.222952 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:18.222957 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:18.223015 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:18.246707 1261197 cri.go:89] found id: ""
	I1217 00:54:18.246721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.246728 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:18.246734 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:18.246792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:18.276152 1261197 cri.go:89] found id: ""
	I1217 00:54:18.276172 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.276180 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:18.276185 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:18.276250 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:18.300697 1261197 cri.go:89] found id: ""
	I1217 00:54:18.300711 1261197 logs.go:282] 0 containers: []
	W1217 00:54:18.300718 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:18.300725 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:18.300735 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:18.365628 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:18.357129   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.357756   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.359407   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.360050   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:18.361606   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:18.365661 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:18.365671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:18.437541 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:18.437560 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:18.465122 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:18.465138 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:18.522977 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:18.522997 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
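Each cycle opens with a bare process check: pgrep -x matches the whole string, -f matches against the full command line, and -n picks the newest match. Here it finds no kube-apiserver process at all, consistent with the empty crictl results. Standalone, with the pattern copied from the log line:

    # -x exact match, -n newest process, -f match the full command line
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process found"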
	I1217 00:54:21.040317 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:21.050538 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:21.050601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:21.074720 1261197 cri.go:89] found id: ""
	I1217 00:54:21.074734 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.074741 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:21.074746 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:21.074808 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:21.099388 1261197 cri.go:89] found id: ""
	I1217 00:54:21.099402 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.099409 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:21.099414 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:21.099471 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:21.123589 1261197 cri.go:89] found id: ""
	I1217 00:54:21.123603 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.123616 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:21.123621 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:21.123680 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:21.149246 1261197 cri.go:89] found id: ""
	I1217 00:54:21.149260 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.149267 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:21.149272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:21.149330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:21.175795 1261197 cri.go:89] found id: ""
	I1217 00:54:21.175809 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.175815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:21.175821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:21.175878 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:21.200104 1261197 cri.go:89] found id: ""
	I1217 00:54:21.200118 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.200125 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:21.200131 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:21.200191 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:21.228601 1261197 cri.go:89] found id: ""
	I1217 00:54:21.228615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:21.228622 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:21.228630 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:21.228642 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:21.285141 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:21.285160 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:21.300538 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:21.300554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:21.368570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:21.359690   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.360441   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362133   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.362671   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:21.364235   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:21.368590 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:21.368601 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:21.438594 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:21.438613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:23.967152 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:23.977246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:23.977330 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:24.002158 1261197 cri.go:89] found id: ""
	I1217 00:54:24.002175 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.002183 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:24.002189 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:24.002297 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:24.034702 1261197 cri.go:89] found id: ""
	I1217 00:54:24.034716 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.034723 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:24.034728 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:24.034788 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:24.059383 1261197 cri.go:89] found id: ""
	I1217 00:54:24.059397 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.059404 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:24.059410 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:24.059466 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:24.088018 1261197 cri.go:89] found id: ""
	I1217 00:54:24.088032 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.088039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:24.088044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:24.088101 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:24.112493 1261197 cri.go:89] found id: ""
	I1217 00:54:24.112507 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.112514 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:24.112519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:24.112575 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:24.139798 1261197 cri.go:89] found id: ""
	I1217 00:54:24.139813 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.139819 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:24.139825 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:24.139886 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:24.164994 1261197 cri.go:89] found id: ""
	I1217 00:54:24.165008 1261197 logs.go:282] 0 containers: []
	W1217 00:54:24.165015 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:24.165022 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:24.165032 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:24.224418 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:24.224438 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:24.239090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:24.239107 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:24.307181 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:24.298410   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.299241   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.300991   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.301309   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:24.302897   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:24.307192 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:24.307203 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:24.369600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:24.369620 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
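Note that describe nodes is run with the node-local kubectl binary and the in-node kubeconfig rather than the host's, so its failure points at the apiserver itself rather than at host-side configuration. Run by hand it fails the same way for as long as nothing serves port 8441 (command copied from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig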
	I1217 00:54:26.910110 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:26.920271 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:26.920343 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:26.947883 1261197 cri.go:89] found id: ""
	I1217 00:54:26.947897 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.947908 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:26.947913 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:26.947987 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:26.973290 1261197 cri.go:89] found id: ""
	I1217 00:54:26.973304 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.973312 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:26.973318 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:26.973377 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:26.997246 1261197 cri.go:89] found id: ""
	I1217 00:54:26.997261 1261197 logs.go:282] 0 containers: []
	W1217 00:54:26.997268 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:26.997272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:26.997328 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:27.023408 1261197 cri.go:89] found id: ""
	I1217 00:54:27.023422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.023429 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:27.023434 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:27.023494 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:27.051626 1261197 cri.go:89] found id: ""
	I1217 00:54:27.051640 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.051648 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:27.051653 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:27.051713 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:27.076431 1261197 cri.go:89] found id: ""
	I1217 00:54:27.076445 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.076452 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:27.076458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:27.076522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:27.101707 1261197 cri.go:89] found id: ""
	I1217 00:54:27.101721 1261197 logs.go:282] 0 containers: []
	W1217 00:54:27.101728 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:27.101738 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:27.101748 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:27.168764 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:27.159424   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.160157   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162060   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.162697   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:27.164430   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:27.168785 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:27.168797 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:27.233485 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:27.233505 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:27.269682 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:27.269699 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:27.328866 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:27.328887 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:29.845088 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:29.855320 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:29.855384 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:29.880133 1261197 cri.go:89] found id: ""
	I1217 00:54:29.880147 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.880156 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:29.880162 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:29.880233 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:29.905055 1261197 cri.go:89] found id: ""
	I1217 00:54:29.905070 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.905078 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:29.905083 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:29.905141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:29.931379 1261197 cri.go:89] found id: ""
	I1217 00:54:29.931393 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.931400 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:29.931404 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:29.931465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:29.957268 1261197 cri.go:89] found id: ""
	I1217 00:54:29.957283 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.957290 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:29.957296 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:29.957360 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:29.982289 1261197 cri.go:89] found id: ""
	I1217 00:54:29.982303 1261197 logs.go:282] 0 containers: []
	W1217 00:54:29.982311 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:29.982316 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:29.982375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:30.024866 1261197 cri.go:89] found id: ""
	I1217 00:54:30.024883 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.024891 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:30.024898 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:30.024973 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:30.071833 1261197 cri.go:89] found id: ""
	I1217 00:54:30.071852 1261197 logs.go:282] 0 containers: []
	W1217 00:54:30.071861 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:30.071877 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:30.071891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:30.147472 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:30.138339   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.139058   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.140827   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.141510   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:30.143194   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:54:30.147484 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:30.147497 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:30.211213 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:30.211235 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:30.240355 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:30.240371 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:30.299743 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:30.299761 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:32.815023 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:32.824966 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:32.825040 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:32.849786 1261197 cri.go:89] found id: ""
	I1217 00:54:32.849799 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.849806 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:32.849812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:32.849875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:32.875478 1261197 cri.go:89] found id: ""
	I1217 00:54:32.875491 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.875498 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:32.875503 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:32.875563 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:32.899514 1261197 cri.go:89] found id: ""
	I1217 00:54:32.899528 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.899534 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:32.899539 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:32.899601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:32.923962 1261197 cri.go:89] found id: ""
	I1217 00:54:32.923977 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.923984 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:32.923990 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:32.924067 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:32.948671 1261197 cri.go:89] found id: ""
	I1217 00:54:32.948685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.948692 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:32.948697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:32.948753 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:32.973420 1261197 cri.go:89] found id: ""
	I1217 00:54:32.973434 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.973440 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:32.973446 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:32.973505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:32.997981 1261197 cri.go:89] found id: ""
	I1217 00:54:32.997996 1261197 logs.go:282] 0 containers: []
	W1217 00:54:32.998003 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:32.998010 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:32.998020 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:33.055157 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:33.055177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:33.070286 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:33.070306 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:33.136931 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:33.127195   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.128490   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.129422   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.130970   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.131425   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:33.127195   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.128490   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.129422   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.130970   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:33.131425   12805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:33.136941 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:33.136952 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:33.199432 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:33.199453 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
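The `found id: ""` lines show what each component check returns: `crictl ps -a --quiet --name=<component>` prints one container ID per line and nothing at all when no container matches, and since -a includes exited containers, an empty result for every one of kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet means the control plane was never created rather than created and crashed. A sketch of the same check (the crictl flags are the ones used in the log; running it requires a CRI runtime on the host):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs printed by `crictl ps -a --quiet --name=<name>`,
	// or an empty slice when nothing matches -- the `found id: ""` case above.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Println(c, ids)
		}
	}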
	I1217 00:54:35.728077 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:35.738194 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:35.738256 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:35.763154 1261197 cri.go:89] found id: ""
	I1217 00:54:35.763169 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.763176 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:35.763182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:35.763238 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:35.787668 1261197 cri.go:89] found id: ""
	I1217 00:54:35.787682 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.787689 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:35.787695 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:35.787751 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:35.811854 1261197 cri.go:89] found id: ""
	I1217 00:54:35.811868 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.811884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:35.811890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:35.811961 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:35.836579 1261197 cri.go:89] found id: ""
	I1217 00:54:35.836594 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.836601 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:35.836607 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:35.836684 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:35.861837 1261197 cri.go:89] found id: ""
	I1217 00:54:35.861851 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.861858 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:35.861863 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:35.861921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:35.886709 1261197 cri.go:89] found id: ""
	I1217 00:54:35.886723 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.886730 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:35.886736 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:35.886792 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:35.910235 1261197 cri.go:89] found id: ""
	I1217 00:54:35.910248 1261197 logs.go:282] 0 containers: []
	W1217 00:54:35.910255 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:35.910275 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:35.910285 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:35.966535 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:35.966553 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:35.981143 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:35.981169 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:36.045220 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:36.037007   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.037415   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039070   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039887   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:36.037007   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.037415   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039070   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.039887   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:36.041555   12909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:36.045231 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:36.045241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:36.106277 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:36.106296 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
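The kubectl errors are consistent with that: `dial tcp [::1]:8441: connect: connection refused` is an immediate TCP rejection, meaning nothing is listening on the apiserver port at all (a hung apiserver would produce a timeout instead). The kubeconfig for this profile points at localhost:8441, per the errors above. The refused-vs-timeout distinction can be reproduced with a plain dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the apiserver port this profile uses.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With no listener this reports "connect: connection refused",
			// matching the kubectl stderr in the log; a wedged listener
			// would surface as "i/o timeout" instead.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441")
	}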
	I1217 00:54:38.637781 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:38.649664 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:38.649725 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:38.691238 1261197 cri.go:89] found id: ""
	I1217 00:54:38.691252 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.691259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:38.691264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:38.691322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:38.716035 1261197 cri.go:89] found id: ""
	I1217 00:54:38.716049 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.716055 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:38.716066 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:38.716125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:38.740603 1261197 cri.go:89] found id: ""
	I1217 00:54:38.740616 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.740624 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:38.740629 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:38.740687 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:38.766239 1261197 cri.go:89] found id: ""
	I1217 00:54:38.766253 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.766260 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:38.766266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:38.766324 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:38.791492 1261197 cri.go:89] found id: ""
	I1217 00:54:38.791506 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.791513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:38.791519 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:38.791579 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:38.816435 1261197 cri.go:89] found id: ""
	I1217 00:54:38.816449 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.816456 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:38.816461 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:38.816520 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:38.841085 1261197 cri.go:89] found id: ""
	I1217 00:54:38.841099 1261197 logs.go:282] 0 containers: []
	W1217 00:54:38.841107 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:38.841114 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:38.841124 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:38.896837 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:38.896856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:38.911640 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:38.911658 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:38.976373 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:38.967894   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.968508   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970302   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.970953   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:38.972582   13016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:38.976383 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:38.976393 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:39.037751 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:39.037771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.567032 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:41.578116 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:41.578182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:41.603748 1261197 cri.go:89] found id: ""
	I1217 00:54:41.603762 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.603770 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:41.603775 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:41.603833 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:41.634998 1261197 cri.go:89] found id: ""
	I1217 00:54:41.635012 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.635019 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:41.635024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:41.635080 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:41.678283 1261197 cri.go:89] found id: ""
	I1217 00:54:41.678297 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.678307 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:41.678312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:41.678375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:41.704945 1261197 cri.go:89] found id: ""
	I1217 00:54:41.704960 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.704967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:41.704977 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:41.705035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:41.729909 1261197 cri.go:89] found id: ""
	I1217 00:54:41.729923 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.729930 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:41.729936 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:41.730019 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:41.754648 1261197 cri.go:89] found id: ""
	I1217 00:54:41.754662 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.754669 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:41.754675 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:41.754734 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:41.779433 1261197 cri.go:89] found id: ""
	I1217 00:54:41.779448 1261197 logs.go:282] 0 containers: []
	W1217 00:54:41.779455 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:41.779463 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:41.779474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:41.793989 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:41.794006 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:41.858584 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:41.850555   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.851085   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.852831   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.853160   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:41.854635   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:41.858594 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:41.858605 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:41.923655 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:41.923682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:41.950619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:41.950638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
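The timestamps give the cadence: each `pgrep -xnf kube-apiserver.*minikube.*` probe lands roughly three seconds after the previous cycle's gathering finishes (00:54:30, :32, :35, :38, :41, ...), i.e. a fixed-interval poll that keeps retrying until some deadline. A minimal sketch of such a loop; the 3 s interval is read off the log and the deadline is an assumption, neither is taken from minikube's source:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// pollAPIServer retries the same process probe the log shows until it
	// succeeds or the deadline passes.
	func pollAPIServer(interval, deadline time.Duration) error {
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			// Exit status 0 means pgrep matched a kube-apiserver process.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("kube-apiserver process never appeared")
	}

	func main() {
		if err := pollAPIServer(3*time.Second, 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}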
	I1217 00:54:44.507762 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:44.517733 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:44.517793 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:44.541892 1261197 cri.go:89] found id: ""
	I1217 00:54:44.541905 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.541924 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:44.541929 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:44.541986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:44.570803 1261197 cri.go:89] found id: ""
	I1217 00:54:44.570818 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.570824 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:44.570830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:44.570889 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:44.599324 1261197 cri.go:89] found id: ""
	I1217 00:54:44.599338 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.599345 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:44.599351 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:44.599412 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:44.632615 1261197 cri.go:89] found id: ""
	I1217 00:54:44.632629 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.632637 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:44.632643 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:44.632705 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:44.659976 1261197 cri.go:89] found id: ""
	I1217 00:54:44.659989 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.660009 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:44.660015 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:44.660085 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:44.688987 1261197 cri.go:89] found id: ""
	I1217 00:54:44.689000 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.689007 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:44.689013 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:44.689069 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:44.712988 1261197 cri.go:89] found id: ""
	I1217 00:54:44.713002 1261197 logs.go:282] 0 containers: []
	W1217 00:54:44.713010 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:44.713018 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:44.713030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:44.727473 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:44.727489 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:44.794008 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:44.786068   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.786467   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788049   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.788609   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:44.790125   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:44.794021 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:44.794031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:44.855600 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:44.855621 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:44.883007 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:44.883023 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.442293 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:47.452401 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:47.452465 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:47.475940 1261197 cri.go:89] found id: ""
	I1217 00:54:47.475953 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.475960 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:47.475965 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:47.476021 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:47.500287 1261197 cri.go:89] found id: ""
	I1217 00:54:47.500302 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.500309 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:47.500314 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:47.500371 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:47.537066 1261197 cri.go:89] found id: ""
	I1217 00:54:47.537080 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.537087 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:47.537091 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:47.537147 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:47.561363 1261197 cri.go:89] found id: ""
	I1217 00:54:47.561377 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.561384 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:47.561390 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:47.561446 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:47.586917 1261197 cri.go:89] found id: ""
	I1217 00:54:47.586931 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.586939 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:47.586944 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:47.587006 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:47.611775 1261197 cri.go:89] found id: ""
	I1217 00:54:47.611789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.611796 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:47.611805 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:47.611862 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:47.648123 1261197 cri.go:89] found id: ""
	I1217 00:54:47.648137 1261197 logs.go:282] 0 containers: []
	W1217 00:54:47.648145 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:47.648152 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:47.648163 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:47.716428 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:47.716447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:47.732842 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:47.732876 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:47.801539 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:47.792820   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.793596   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795104   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.795641   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:47.797268   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:47.801549 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:47.801559 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:47.863256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:47.863276 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
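Within each `failed describe nodes` entry the five memcache.go:265 lines and the final "connection refused" line appear twice: once in the wrapped error summary and once again inside the quoted `** stderr **` block, because the harness records both the formatted error and the raw captured stderr. The five repeats per attempt appear to be kubectl's client-side API-discovery retries before it gives up. Capturing the streams separately, as the entries' `stdout:` / `stderr:` layout does, looks like this (same command string as the log; the paths assume a minikube node):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
				"--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run() // exits with status 1 while the apiserver is down
		fmt.Printf("err: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
	}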
	I1217 00:54:50.394435 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:50.404927 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:50.404986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:50.429607 1261197 cri.go:89] found id: ""
	I1217 00:54:50.429621 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.429628 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:50.429634 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:50.429731 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:50.454601 1261197 cri.go:89] found id: ""
	I1217 00:54:50.454615 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.454622 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:50.454627 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:50.454689 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:50.484855 1261197 cri.go:89] found id: ""
	I1217 00:54:50.484877 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.484884 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:50.484890 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:50.484950 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:50.510003 1261197 cri.go:89] found id: ""
	I1217 00:54:50.510018 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.510025 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:50.510030 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:50.510089 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:50.533511 1261197 cri.go:89] found id: ""
	I1217 00:54:50.533525 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.533532 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:50.533537 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:50.533602 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:50.558386 1261197 cri.go:89] found id: ""
	I1217 00:54:50.558400 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.558407 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:50.558419 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:50.558476 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:50.587409 1261197 cri.go:89] found id: ""
	I1217 00:54:50.587422 1261197 logs.go:282] 0 containers: []
	W1217 00:54:50.587429 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:50.587437 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:50.587447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:50.644042 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:50.644061 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:50.661242 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:50.661257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:50.732592 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:50.724504   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.724955   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726511   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.726969   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:50.728497   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:50.732602 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:50.732613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:50.793447 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:50.793466 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:53.322439 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:53.332470 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:53.332535 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:53.357094 1261197 cri.go:89] found id: ""
	I1217 00:54:53.357108 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.357116 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:53.357121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:53.357182 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:53.381629 1261197 cri.go:89] found id: ""
	I1217 00:54:53.381667 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.381674 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:53.381679 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:53.381743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:53.407630 1261197 cri.go:89] found id: ""
	I1217 00:54:53.407644 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.407651 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:53.407656 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:53.407718 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:53.435972 1261197 cri.go:89] found id: ""
	I1217 00:54:53.435986 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.435993 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:53.435999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:53.436059 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:53.461545 1261197 cri.go:89] found id: ""
	I1217 00:54:53.461558 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.461565 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:53.461570 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:53.461629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:53.491744 1261197 cri.go:89] found id: ""
	I1217 00:54:53.491758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.491766 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:53.491771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:53.491836 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:53.517147 1261197 cri.go:89] found id: ""
	I1217 00:54:53.517161 1261197 logs.go:282] 0 containers: []
	W1217 00:54:53.517170 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:53.517177 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:53.517188 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:53.573158 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:53.573177 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:53.588088 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:53.588104 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:53.665911 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:53.656341   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.657239   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659336   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.659633   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:53.662117   13539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:53.665933 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:53.665945 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:53.735506 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:53.735530 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
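By this point the pattern is fully established: every probe in every cycle has found zero containers for all seven components and every kubectl call has been refused, so the failure sits below Kubernetes itself. Assuming the usual kubeadm bootstrap, the control-plane containers kubelet should have launched from its static pod manifests never materialized, which makes the kubelet and containerd journals gathered above the place to look for the root cause.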
	I1217 00:54:56.268624 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:56.279995 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:56.280060 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:56.304847 1261197 cri.go:89] found id: ""
	I1217 00:54:56.304874 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.304881 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:56.304887 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:56.304952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:56.329820 1261197 cri.go:89] found id: ""
	I1217 00:54:56.329834 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.329841 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:56.329846 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:56.329902 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:56.354667 1261197 cri.go:89] found id: ""
	I1217 00:54:56.354685 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.354695 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:56.354700 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:56.354779 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:56.383823 1261197 cri.go:89] found id: ""
	I1217 00:54:56.383837 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.383844 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:56.383850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:56.383907 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:56.408219 1261197 cri.go:89] found id: ""
	I1217 00:54:56.408233 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.408240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:56.408246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:56.408305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:56.433745 1261197 cri.go:89] found id: ""
	I1217 00:54:56.433758 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.433765 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:56.433771 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:56.433843 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:56.458631 1261197 cri.go:89] found id: ""
	I1217 00:54:56.458645 1261197 logs.go:282] 0 containers: []
	W1217 00:54:56.458653 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:56.458660 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:56.458671 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:56.473217 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:56.473233 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:56.540570 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:56.531397   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.532121   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534006   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.534683   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.536305   13639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:56.540579 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:56.540591 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:56.605775 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:56.605795 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:56.659436 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:56.659452 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.225973 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:54:59.236165 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:54:59.236223 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:54:59.262172 1261197 cri.go:89] found id: ""
	I1217 00:54:59.262185 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.262193 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:54:59.262198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:54:59.262254 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:54:59.286403 1261197 cri.go:89] found id: ""
	I1217 00:54:59.286417 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.286425 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:54:59.286430 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:54:59.286489 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:54:59.311254 1261197 cri.go:89] found id: ""
	I1217 00:54:59.311268 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.311276 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:54:59.311280 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:54:59.311336 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:54:59.339495 1261197 cri.go:89] found id: ""
	I1217 00:54:59.339510 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.339519 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:54:59.339524 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:54:59.339583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:54:59.364038 1261197 cri.go:89] found id: ""
	I1217 00:54:59.364052 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.364068 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:54:59.364074 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:54:59.364130 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:54:59.388359 1261197 cri.go:89] found id: ""
	I1217 00:54:59.388373 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.388391 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:54:59.388396 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:54:59.388462 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:54:59.412775 1261197 cri.go:89] found id: ""
	I1217 00:54:59.412789 1261197 logs.go:282] 0 containers: []
	W1217 00:54:59.412806 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:54:59.412815 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:54:59.412824 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:54:59.475190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:54:59.475211 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:54:59.504917 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:54:59.504933 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:54:59.561462 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:54:59.561481 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:54:59.576156 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:54:59.576171 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:59.641179 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:59.633086   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634094   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.634928   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.635697   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:59.637181   13762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:02.141436 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:02.152012 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:02.152075 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:02.180949 1261197 cri.go:89] found id: ""
	I1217 00:55:02.180963 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.180970 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:02.180976 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:02.181046 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:02.204892 1261197 cri.go:89] found id: ""
	I1217 00:55:02.204915 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.204922 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:02.204928 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:02.205035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:02.230226 1261197 cri.go:89] found id: ""
	I1217 00:55:02.230239 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.230247 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:02.230252 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:02.230309 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:02.254922 1261197 cri.go:89] found id: ""
	I1217 00:55:02.254936 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.254944 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:02.254949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:02.255012 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:02.279652 1261197 cri.go:89] found id: ""
	I1217 00:55:02.279666 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.279673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:02.279678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:02.279737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:02.306126 1261197 cri.go:89] found id: ""
	I1217 00:55:02.306139 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.306146 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:02.306152 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:02.306209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:02.330968 1261197 cri.go:89] found id: ""
	I1217 00:55:02.330982 1261197 logs.go:282] 0 containers: []
	W1217 00:55:02.330989 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:02.330997 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:02.331007 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:02.386453 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:02.386473 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:02.401019 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:02.401036 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:02.462681 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:02.454421   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.455077   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.456779   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.457349   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:02.458833   13855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:02.462691 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:02.462701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:02.523460 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:02.523480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.051274 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:05.061850 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:05.061924 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:05.087077 1261197 cri.go:89] found id: ""
	I1217 00:55:05.087092 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.087099 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:05.087105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:05.087167 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:05.113592 1261197 cri.go:89] found id: ""
	I1217 00:55:05.113607 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.113614 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:05.113620 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:05.113702 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:05.139004 1261197 cri.go:89] found id: ""
	I1217 00:55:05.139019 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.139026 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:05.139031 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:05.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:05.163703 1261197 cri.go:89] found id: ""
	I1217 00:55:05.163717 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.163725 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:05.163731 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:05.163791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:05.188990 1261197 cri.go:89] found id: ""
	I1217 00:55:05.189004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.189011 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:05.189024 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:05.189083 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:05.218147 1261197 cri.go:89] found id: ""
	I1217 00:55:05.218161 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.218168 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:05.218174 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:05.218246 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:05.242561 1261197 cri.go:89] found id: ""
	I1217 00:55:05.242575 1261197 logs.go:282] 0 containers: []
	W1217 00:55:05.242592 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:05.242600 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:05.242610 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:05.303683 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:05.303701 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:05.331484 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:05.331499 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:05.392845 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:05.392868 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:05.407882 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:05.407898 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:05.474193 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:05.465537   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.466393   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468098   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.468649   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:05.470359   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:07.974416 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:07.984527 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:07.984588 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:08.011706 1261197 cri.go:89] found id: ""
	I1217 00:55:08.011722 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.011730 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:08.011735 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:08.011803 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:08.038984 1261197 cri.go:89] found id: ""
	I1217 00:55:08.038998 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.039005 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:08.039011 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:08.039072 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:08.066839 1261197 cri.go:89] found id: ""
	I1217 00:55:08.066854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.066861 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:08.066866 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:08.066928 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:08.096940 1261197 cri.go:89] found id: ""
	I1217 00:55:08.096954 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.096962 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:08.096968 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:08.097026 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:08.124219 1261197 cri.go:89] found id: ""
	I1217 00:55:08.124232 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.124240 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:08.124245 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:08.124308 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:08.149339 1261197 cri.go:89] found id: ""
	I1217 00:55:08.149353 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.149360 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:08.149365 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:08.149424 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:08.173327 1261197 cri.go:89] found id: ""
	I1217 00:55:08.173350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:08.173358 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:08.173366 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:08.173376 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:08.229871 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:08.229891 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:08.244853 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:08.244877 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:08.312062 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:08.303447   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.304197   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.305960   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.306611   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:08.308332   14069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:08.312072 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:08.312082 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:08.373219 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:08.373238 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:10.901813 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:10.913062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:10.913131 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:10.939973 1261197 cri.go:89] found id: ""
	I1217 00:55:10.939987 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.939994 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:10.939999 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:10.940057 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:10.965488 1261197 cri.go:89] found id: ""
	I1217 00:55:10.965502 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.965509 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:10.965514 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:10.965574 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:10.990743 1261197 cri.go:89] found id: ""
	I1217 00:55:10.990758 1261197 logs.go:282] 0 containers: []
	W1217 00:55:10.990766 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:10.990772 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:10.990851 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:11.017298 1261197 cri.go:89] found id: ""
	I1217 00:55:11.017322 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.017330 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:11.017336 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:11.017405 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:11.043148 1261197 cri.go:89] found id: ""
	I1217 00:55:11.043163 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.043170 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:11.043175 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:11.043236 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:11.070182 1261197 cri.go:89] found id: ""
	I1217 00:55:11.070196 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.070207 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:11.070213 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:11.070284 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:11.098403 1261197 cri.go:89] found id: ""
	I1217 00:55:11.098419 1261197 logs.go:282] 0 containers: []
	W1217 00:55:11.098426 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:11.098434 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:11.098445 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:11.154712 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:11.154732 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:11.171447 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:11.171469 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:11.235332 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:11.227431   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.227826   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229545   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.229918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:11.231398   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:11.235344 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:11.235354 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:11.298591 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:11.298611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:13.826200 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:13.836246 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:13.836303 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:13.861099 1261197 cri.go:89] found id: ""
	I1217 00:55:13.861113 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.861120 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:13.861125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:13.861183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:13.898315 1261197 cri.go:89] found id: ""
	I1217 00:55:13.898328 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.898335 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:13.898340 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:13.898403 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:13.927870 1261197 cri.go:89] found id: ""
	I1217 00:55:13.927884 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.927902 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:13.927908 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:13.927986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:13.956407 1261197 cri.go:89] found id: ""
	I1217 00:55:13.956421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.956428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:13.956433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:13.956500 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:13.981521 1261197 cri.go:89] found id: ""
	I1217 00:55:13.981553 1261197 logs.go:282] 0 containers: []
	W1217 00:55:13.981560 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:13.981565 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:13.981630 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:14.007326 1261197 cri.go:89] found id: ""
	I1217 00:55:14.007350 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.007358 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:14.007364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:14.007433 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:14.034794 1261197 cri.go:89] found id: ""
	I1217 00:55:14.034809 1261197 logs.go:282] 0 containers: []
	W1217 00:55:14.034816 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:14.034824 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:14.034835 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:14.091355 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:14.091375 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:14.106561 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:14.106579 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:14.176400 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:14.168662   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.169316   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.170714   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.171141   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:14.172630   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:14.176410 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:14.176420 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:14.242568 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:14.242593 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:16.776330 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:16.786496 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:16.786558 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:16.811486 1261197 cri.go:89] found id: ""
	I1217 00:55:16.811500 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.811507 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:16.811512 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:16.811576 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:16.839885 1261197 cri.go:89] found id: ""
	I1217 00:55:16.839898 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.839905 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:16.839910 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:16.839972 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:16.865332 1261197 cri.go:89] found id: ""
	I1217 00:55:16.865346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.865353 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:16.865359 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:16.865419 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:16.904044 1261197 cri.go:89] found id: ""
	I1217 00:55:16.904058 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.904065 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:16.904071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:16.904133 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:16.934495 1261197 cri.go:89] found id: ""
	I1217 00:55:16.934508 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.934515 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:16.934521 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:16.934582 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:16.959038 1261197 cri.go:89] found id: ""
	I1217 00:55:16.959052 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.959060 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:16.959065 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:16.959123 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:16.987609 1261197 cri.go:89] found id: ""
	I1217 00:55:16.987622 1261197 logs.go:282] 0 containers: []
	W1217 00:55:16.987630 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:16.987637 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:16.987647 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:17.046635 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:17.046655 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:17.062321 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:17.062345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:17.130440 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:17.121381   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.122096   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.123717   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.124272   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:17.126062   14378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:17.130450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:17.130460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:17.192501 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:17.192521 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:19.724677 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:19.736386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:19.736459 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:19.763100 1261197 cri.go:89] found id: ""
	I1217 00:55:19.763114 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.763121 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:19.763127 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:19.763185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:19.791470 1261197 cri.go:89] found id: ""
	I1217 00:55:19.791483 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.791490 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:19.791495 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:19.791552 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:19.816395 1261197 cri.go:89] found id: ""
	I1217 00:55:19.816410 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.816417 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:19.816422 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:19.816482 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:19.841971 1261197 cri.go:89] found id: ""
	I1217 00:55:19.841984 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.841991 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:19.841997 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:19.842058 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:19.866385 1261197 cri.go:89] found id: ""
	I1217 00:55:19.866399 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.866406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:19.866411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:19.866468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:19.904121 1261197 cri.go:89] found id: ""
	I1217 00:55:19.904135 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.904153 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:19.904160 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:19.904217 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:19.940290 1261197 cri.go:89] found id: ""
	I1217 00:55:19.940304 1261197 logs.go:282] 0 containers: []
	W1217 00:55:19.940311 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:19.940319 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:19.940329 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:19.955177 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:19.955193 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:20.024806 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:20.015631   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.016294   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018094   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.018616   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:20.020222   14480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:20.024817 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:20.024830 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:20.088972 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:20.088996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:20.122058 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:20.122075 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:22.679929 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:22.690102 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:22.690162 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:22.717462 1261197 cri.go:89] found id: ""
	I1217 00:55:22.717476 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.717483 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:22.717489 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:22.717550 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:22.744363 1261197 cri.go:89] found id: ""
	I1217 00:55:22.744377 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.744390 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:22.744395 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:22.744454 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:22.770975 1261197 cri.go:89] found id: ""
	I1217 00:55:22.770989 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.770996 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:22.771001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:22.771068 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:22.795702 1261197 cri.go:89] found id: ""
	I1217 00:55:22.795716 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.795724 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:22.795729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:22.795787 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:22.820186 1261197 cri.go:89] found id: ""
	I1217 00:55:22.820200 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.820206 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:22.820212 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:22.820269 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:22.844518 1261197 cri.go:89] found id: ""
	I1217 00:55:22.844533 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.844540 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:22.844545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:22.844604 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:22.884821 1261197 cri.go:89] found id: ""
	I1217 00:55:22.884834 1261197 logs.go:282] 0 containers: []
	W1217 00:55:22.884841 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:22.884849 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:22.884860 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:22.901504 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:22.901520 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:22.975115 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:22.967246   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.967652   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969292   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.969703   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:22.971149   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:22.975125 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:22.975135 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:23.036546 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:23.036566 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:23.070681 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:23.070697 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.627462 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:25.638109 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:25.638168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:25.671791 1261197 cri.go:89] found id: ""
	I1217 00:55:25.671806 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.671813 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:25.671821 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:25.671884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:25.696990 1261197 cri.go:89] found id: ""
	I1217 00:55:25.697004 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.697011 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:25.697016 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:25.697082 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:25.722087 1261197 cri.go:89] found id: ""
	I1217 00:55:25.722101 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.722110 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:25.722115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:25.722184 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:25.747407 1261197 cri.go:89] found id: ""
	I1217 00:55:25.747421 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.747428 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:25.747433 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:25.747495 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:25.772602 1261197 cri.go:89] found id: ""
	I1217 00:55:25.772617 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.772623 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:25.772628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:25.772694 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:25.802452 1261197 cri.go:89] found id: ""
	I1217 00:55:25.802466 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.802473 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:25.802478 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:25.802538 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:25.827066 1261197 cri.go:89] found id: ""
	I1217 00:55:25.827081 1261197 logs.go:282] 0 containers: []
	W1217 00:55:25.827088 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:25.827096 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:25.827109 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:25.886656 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:25.886676 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:25.903090 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:25.903108 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:25.973568 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:25.964918   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.965711   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.967501   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.968137   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:25.969777   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:25.973578 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:25.973587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:26.036642 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:26.036662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:28.571573 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:28.581543 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:28.581601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:28.607202 1261197 cri.go:89] found id: ""
	I1217 00:55:28.607216 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.607224 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:28.607229 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:28.607288 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:28.630842 1261197 cri.go:89] found id: ""
	I1217 00:55:28.630857 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.630864 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:28.630869 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:28.630927 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:28.656052 1261197 cri.go:89] found id: ""
	I1217 00:55:28.656066 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.656073 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:28.656079 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:28.656135 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:28.680008 1261197 cri.go:89] found id: ""
	I1217 00:55:28.680022 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.680029 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:28.680034 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:28.680104 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:28.704668 1261197 cri.go:89] found id: ""
	I1217 00:55:28.704682 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.704689 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:28.704694 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:28.704756 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:28.733961 1261197 cri.go:89] found id: ""
	I1217 00:55:28.733974 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.733981 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:28.733986 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:28.734042 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:28.759990 1261197 cri.go:89] found id: ""
	I1217 00:55:28.760005 1261197 logs.go:282] 0 containers: []
	W1217 00:55:28.760013 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:28.760021 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:28.760030 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:28.815642 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:28.815661 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:28.830313 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:28.830333 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:28.907265 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:28.899318   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.899681   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901186   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.901835   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:28.903377   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:28.907287 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:28.907299 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:28.978223 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:28.978244 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:31.508374 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:31.518631 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:31.518696 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:31.543672 1261197 cri.go:89] found id: ""
	I1217 00:55:31.543686 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.543693 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:31.543701 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:31.543760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:31.568914 1261197 cri.go:89] found id: ""
	I1217 00:55:31.568929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.568944 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:31.568949 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:31.569017 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:31.593432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.593453 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.593461 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:31.593466 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:31.593537 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:31.619217 1261197 cri.go:89] found id: ""
	I1217 00:55:31.619231 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.619238 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:31.619243 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:31.619299 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:31.647432 1261197 cri.go:89] found id: ""
	I1217 00:55:31.647445 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.647453 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:31.647458 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:31.647522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:31.675117 1261197 cri.go:89] found id: ""
	I1217 00:55:31.675130 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.675138 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:31.675143 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:31.675200 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:31.698973 1261197 cri.go:89] found id: ""
	I1217 00:55:31.698986 1261197 logs.go:282] 0 containers: []
	W1217 00:55:31.698993 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:31.699001 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:31.699010 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:31.754429 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:31.754447 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:31.768968 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:31.768984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:31.831791 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:31.823136   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.823971   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825441   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.825953   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.827502   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:31.831801 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:31.831811 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:31.900759 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:31.900777 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
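	The cycle then repeats: the probe timestamps (00:55:19.72, 00:55:22.68, 00:55:25.63, 00:55:28.57, 00:55:31.51, ...) show the same apiserver check re-running roughly every three seconds, with each failure triggering another full log-gathering pass. In shell terms the wait amounts to something like the following sketch (an approximation of the behaviour visible in the timestamps, not minikube's actual code):

	    # poll for a kube-apiserver process until one appears (or an outer timeout fires)
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	        sleep 3   # the log shows a fresh probe about every 3 seconds
	    done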
	I1217 00:55:34.429727 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:34.440562 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:34.440629 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:34.465412 1261197 cri.go:89] found id: ""
	I1217 00:55:34.465425 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.465433 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:34.465438 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:34.465496 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:34.489937 1261197 cri.go:89] found id: ""
	I1217 00:55:34.489951 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.489978 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:34.489987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:34.490055 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:34.520581 1261197 cri.go:89] found id: ""
	I1217 00:55:34.520602 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.520610 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:34.520615 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:34.520682 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:34.547718 1261197 cri.go:89] found id: ""
	I1217 00:55:34.547732 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.547739 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:34.547744 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:34.547806 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:34.572103 1261197 cri.go:89] found id: ""
	I1217 00:55:34.572116 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.572133 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:34.572138 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:34.572209 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:34.600789 1261197 cri.go:89] found id: ""
	I1217 00:55:34.600819 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.600827 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:34.600832 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:34.600921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:34.627220 1261197 cri.go:89] found id: ""
	I1217 00:55:34.627234 1261197 logs.go:282] 0 containers: []
	W1217 00:55:34.627240 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:34.627248 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:34.627257 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:34.682307 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:34.682327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:34.697255 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:34.697271 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:34.764504 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:34.756282   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.757017   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758548   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.758914   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:34.760473   14999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:34.764515 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:34.764525 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:34.826010 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:34.826029 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.353119 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:37.363135 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:37.363198 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:37.387754 1261197 cri.go:89] found id: ""
	I1217 00:55:37.387773 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.387781 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:37.387787 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:37.387845 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:37.413391 1261197 cri.go:89] found id: ""
	I1217 00:55:37.413404 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.413411 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:37.413417 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:37.413474 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:37.439523 1261197 cri.go:89] found id: ""
	I1217 00:55:37.439537 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.439544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:37.439549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:37.439607 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:37.469209 1261197 cri.go:89] found id: ""
	I1217 00:55:37.469223 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.469230 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:37.469235 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:37.469296 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:37.495794 1261197 cri.go:89] found id: ""
	I1217 00:55:37.495807 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.495814 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:37.495819 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:37.495875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:37.520612 1261197 cri.go:89] found id: ""
	I1217 00:55:37.520625 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.520642 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:37.520648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:37.520720 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:37.547269 1261197 cri.go:89] found id: ""
	I1217 00:55:37.547283 1261197 logs.go:282] 0 containers: []
	W1217 00:55:37.547290 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:37.547299 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:37.547308 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:37.608835 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:37.608856 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:37.635364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:37.635383 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:37.694966 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:37.694984 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:37.709746 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:37.709763 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:37.775515 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:37.766923   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.767602   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769315   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.769982   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:37.771527   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
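	Each "Gathering logs for ..." line above corresponds to one command on the node; collected in one place (copied verbatim from the Run: lines in this log, runnable over minikube ssh), the full diagnostic set per cycle is:

	    $ sudo journalctl -u kubelet -n 400
	    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    $ sudo journalctl -u containerd -n 400
	    $ sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Only the describe nodes step fails in this log (it is the only one that needs a reachable apiserver); the journald, dmesg and container-status commands complete without a warning, which is consistent with a node whose services answer locally but whose control-plane containers never started.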
	I1217 00:55:40.277182 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:40.287332 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:40.287393 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:40.315852 1261197 cri.go:89] found id: ""
	I1217 00:55:40.315866 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.315873 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:40.315879 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:40.315936 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:40.340196 1261197 cri.go:89] found id: ""
	I1217 00:55:40.340210 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.340217 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:40.340222 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:40.340279 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:40.365794 1261197 cri.go:89] found id: ""
	I1217 00:55:40.365815 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.365823 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:40.365828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:40.365899 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:40.391466 1261197 cri.go:89] found id: ""
	I1217 00:55:40.391480 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.391488 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:40.391493 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:40.391553 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:40.420286 1261197 cri.go:89] found id: ""
	I1217 00:55:40.420300 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.420307 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:40.420312 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:40.420373 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:40.449247 1261197 cri.go:89] found id: ""
	I1217 00:55:40.449261 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.449268 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:40.449274 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:40.449331 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:40.474951 1261197 cri.go:89] found id: ""
	I1217 00:55:40.474965 1261197 logs.go:282] 0 containers: []
	W1217 00:55:40.474972 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:40.474980 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:40.474990 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:40.540502 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:55:40.532003   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.532778   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534415   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.534923   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:40.536671   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:55:40.540513 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:40.540524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:40.602747 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:40.602766 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:40.629888 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:40.629904 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:40.686174 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:40.686191 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.201825 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:43.212126 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:43.212185 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:43.237088 1261197 cri.go:89] found id: ""
	I1217 00:55:43.237109 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.237115 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:43.237121 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:43.237183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:43.262148 1261197 cri.go:89] found id: ""
	I1217 00:55:43.262162 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.262177 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:43.262182 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:43.262239 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:43.286264 1261197 cri.go:89] found id: ""
	I1217 00:55:43.286278 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.286285 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:43.286290 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:43.286346 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:43.310644 1261197 cri.go:89] found id: ""
	I1217 00:55:43.310657 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.310664 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:43.310670 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:43.310730 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:43.335131 1261197 cri.go:89] found id: ""
	I1217 00:55:43.335146 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.335153 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:43.335158 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:43.335220 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:43.364301 1261197 cri.go:89] found id: ""
	I1217 00:55:43.364315 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.364323 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:43.364331 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:43.364390 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:43.391204 1261197 cri.go:89] found id: ""
	I1217 00:55:43.391218 1261197 logs.go:282] 0 containers: []
	W1217 00:55:43.391225 1261197 logs.go:284] No container was found matching "kindnet"
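	Every crictl probe above returns an empty ID list: the runtime holds no control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) in any state, which is why the API calls that follow fail. A sketch of the same sweep as one loop, assuming crictl is on the node's PATH as the log's fallback suggests:

	    # repeat minikube's per-name CRI sweep in a single pass
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done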
	I1217 00:55:43.391233 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:43.391252 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:43.450751 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:43.450771 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:43.466709 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:43.466726 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:43.533713 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:43.525325   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.526016   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.527599   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.528061   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.529603   15320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
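	The describe-nodes failure is a downstream symptom of the missing apiserver: kubectl dials the endpoint recorded in /var/lib/minikube/kubeconfig (localhost:8441 here) and nothing is listening. A quick hedged check from inside the node, assuming curl is available in the node image (-k skips certificate verification; /healthz is the apiserver's standard health endpoint):

	    # probe the endpoint the refused connections above point at
	    curl -k https://localhost:8441/healthz || echo "apiserver not listening on 8441"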
	I1217 00:55:43.533723 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:43.533734 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:43.601250 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:43.601269 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.134875 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:46.146399 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:46.146468 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:46.179014 1261197 cri.go:89] found id: ""
	I1217 00:55:46.179028 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.179044 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:46.179050 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:46.179115 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:46.208346 1261197 cri.go:89] found id: ""
	I1217 00:55:46.208360 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.208377 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:46.208383 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:46.208441 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:46.233331 1261197 cri.go:89] found id: ""
	I1217 00:55:46.233346 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.233361 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:46.233367 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:46.233423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:46.259330 1261197 cri.go:89] found id: ""
	I1217 00:55:46.259344 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.259351 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:46.259357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:46.259413 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:46.283871 1261197 cri.go:89] found id: ""
	I1217 00:55:46.283885 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.283902 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:46.283907 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:46.283975 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:46.308301 1261197 cri.go:89] found id: ""
	I1217 00:55:46.308316 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.308331 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:46.308337 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:46.308397 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:46.332677 1261197 cri.go:89] found id: ""
	I1217 00:55:46.332691 1261197 logs.go:282] 0 containers: []
	W1217 00:55:46.332699 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:46.332706 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:46.332716 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:46.347830 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:46.347846 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:46.413688 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:46.405034   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.405738   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407339   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.407807   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:46.409369   15422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:46.413699 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:46.413709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:46.475238 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:46.475260 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:46.502692 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:46.502708 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.063356 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:49.074298 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:49.074364 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:49.102541 1261197 cri.go:89] found id: ""
	I1217 00:55:49.102555 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.102562 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:49.102567 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:49.102625 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:49.132690 1261197 cri.go:89] found id: ""
	I1217 00:55:49.132706 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.132713 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:49.132718 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:49.132780 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:49.159962 1261197 cri.go:89] found id: ""
	I1217 00:55:49.159976 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.159983 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:49.159987 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:49.160047 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:49.186672 1261197 cri.go:89] found id: ""
	I1217 00:55:49.186685 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.186692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:49.186703 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:49.186760 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:49.215488 1261197 cri.go:89] found id: ""
	I1217 00:55:49.215506 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.215513 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:49.215518 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:49.215594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:49.243652 1261197 cri.go:89] found id: ""
	I1217 00:55:49.243667 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.243674 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:49.243680 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:49.243746 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:49.271745 1261197 cri.go:89] found id: ""
	I1217 00:55:49.271762 1261197 logs.go:282] 0 containers: []
	W1217 00:55:49.271769 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:49.271777 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:49.271789 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:49.305614 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:49.305638 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:49.361396 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:49.361414 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:49.377081 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:49.377097 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:49.448394 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:49.440321   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.441054   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.442751   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.443148   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:49.444645   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:49.448405 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:49.448416 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.014619 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:52.025272 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:52.025334 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:52.050179 1261197 cri.go:89] found id: ""
	I1217 00:55:52.050193 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.050201 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:52.050206 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:52.050267 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:52.075171 1261197 cri.go:89] found id: ""
	I1217 00:55:52.075186 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.075193 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:52.075198 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:52.075258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:52.100730 1261197 cri.go:89] found id: ""
	I1217 00:55:52.100745 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.100752 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:52.100758 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:52.100819 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:52.139001 1261197 cri.go:89] found id: ""
	I1217 00:55:52.139016 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.139023 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:52.139028 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:52.139091 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:52.167837 1261197 cri.go:89] found id: ""
	I1217 00:55:52.167854 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.167861 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:52.167876 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:52.167939 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:52.195893 1261197 cri.go:89] found id: ""
	I1217 00:55:52.195907 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.195914 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:52.195919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:52.195986 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:52.226474 1261197 cri.go:89] found id: ""
	I1217 00:55:52.226489 1261197 logs.go:282] 0 containers: []
	W1217 00:55:52.226496 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:52.226504 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:52.226514 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:52.283106 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:52.283125 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:52.298214 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:52.298230 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:52.368183 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:52.359664   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.360347   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362149   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.362749   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:52.364346   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:52.368194 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:52.368205 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:52.430851 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:52.430873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:54.962672 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.972814 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:54.972874 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:54.998554 1261197 cri.go:89] found id: ""
	I1217 00:55:54.998568 1261197 logs.go:282] 0 containers: []
	W1217 00:55:54.998575 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:54.998580 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:54.998640 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:55.027159 1261197 cri.go:89] found id: ""
	I1217 00:55:55.027174 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.027181 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:55.027187 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:55.027258 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:55.057204 1261197 cri.go:89] found id: ""
	I1217 00:55:55.057219 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.057226 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:55.057241 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:55.057302 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:55.082858 1261197 cri.go:89] found id: ""
	I1217 00:55:55.082872 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.082880 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:55.082885 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:55.082952 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:55.108074 1261197 cri.go:89] found id: ""
	I1217 00:55:55.108088 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.108095 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:55.108100 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:55.108168 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:55.142170 1261197 cri.go:89] found id: ""
	I1217 00:55:55.142184 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.142204 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:55.142210 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:55.142277 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:55.174306 1261197 cri.go:89] found id: ""
	I1217 00:55:55.174333 1261197 logs.go:282] 0 containers: []
	W1217 00:55:55.174341 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:55.174349 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:55.174361 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:55.234605 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:55.234625 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:55:55.249756 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:55.249773 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:55.312439 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:55.304096   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.304861   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.306588   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.307122   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:55.308674   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:55.312450 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:55.312460 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:55.373256 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:55.373275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:57.900997 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.911464 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:55:57.911522 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:55:57.936082 1261197 cri.go:89] found id: ""
	I1217 00:55:57.936096 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.936104 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:55:57.936115 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:55:57.936172 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:55:57.960175 1261197 cri.go:89] found id: ""
	I1217 00:55:57.960190 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.960197 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:55:57.960202 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:55:57.960266 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:55:57.986658 1261197 cri.go:89] found id: ""
	I1217 00:55:57.986671 1261197 logs.go:282] 0 containers: []
	W1217 00:55:57.986678 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:55:57.986684 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:55:57.986743 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:55:58.012944 1261197 cri.go:89] found id: ""
	I1217 00:55:58.012959 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.012967 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:55:58.012973 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:55:58.013035 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:55:58.041226 1261197 cri.go:89] found id: ""
	I1217 00:55:58.041241 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.041248 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:55:58.041253 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:55:58.041319 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:55:58.066914 1261197 cri.go:89] found id: ""
	I1217 00:55:58.066929 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.066937 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:55:58.066943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:55:58.067000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:55:58.090571 1261197 cri.go:89] found id: ""
	I1217 00:55:58.090586 1261197 logs.go:282] 0 containers: []
	W1217 00:55:58.090593 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:55:58.090601 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:55:58.090611 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:55:58.161546 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:58.153473   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.154320   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.155853   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.156155   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:58.157630   15828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:55:58.161556 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:55:58.161578 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:55:58.230111 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:55:58.230131 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:55:58.259134 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:55:58.259150 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:55:58.315698 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:55:58.315715 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:00.831924 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.842106 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:00.842166 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:00.868037 1261197 cri.go:89] found id: ""
	I1217 00:56:00.868051 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.868057 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:00.868062 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:00.868138 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:00.893020 1261197 cri.go:89] found id: ""
	I1217 00:56:00.893046 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.893053 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:00.893059 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:00.893125 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:00.918054 1261197 cri.go:89] found id: ""
	I1217 00:56:00.918068 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.918075 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:00.918081 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:00.918139 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:00.947584 1261197 cri.go:89] found id: ""
	I1217 00:56:00.947599 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.947607 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:00.947612 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:00.947675 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:00.974913 1261197 cri.go:89] found id: ""
	I1217 00:56:00.974929 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.974936 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:00.974941 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:00.975000 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:00.998262 1261197 cri.go:89] found id: ""
	I1217 00:56:00.998276 1261197 logs.go:282] 0 containers: []
	W1217 00:56:00.998284 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:00.998289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:00.998345 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:01.025055 1261197 cri.go:89] found id: ""
	I1217 00:56:01.025071 1261197 logs.go:282] 0 containers: []
	W1217 00:56:01.025079 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:01.025099 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:01.025110 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:01.080854 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:01.080873 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:01.095680 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:01.095698 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:01.174559 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:01.164757   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.165678   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.167766   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.168430   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:01.170271   15937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:01.174574 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:01.174587 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:01.240953 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:01.240973 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:03.778460 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.788536 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:03.788601 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:03.816065 1261197 cri.go:89] found id: ""
	I1217 00:56:03.816080 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.816087 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:03.816093 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:03.816158 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:03.840359 1261197 cri.go:89] found id: ""
	I1217 00:56:03.840373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.840381 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:03.840386 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:03.840443 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:03.865338 1261197 cri.go:89] found id: ""
	I1217 00:56:03.865351 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.865359 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:03.865364 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:03.865421 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:03.889916 1261197 cri.go:89] found id: ""
	I1217 00:56:03.889930 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.889937 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:03.889943 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:03.890011 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:03.913782 1261197 cri.go:89] found id: ""
	I1217 00:56:03.913796 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.913804 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:03.913815 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:03.913875 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:03.938356 1261197 cri.go:89] found id: ""
	I1217 00:56:03.938371 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.938379 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:03.938385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:03.938447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:03.963432 1261197 cri.go:89] found id: ""
	I1217 00:56:03.963446 1261197 logs.go:282] 0 containers: []
	W1217 00:56:03.963454 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:03.963461 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:03.963474 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:04.024730 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:04.024752 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:04.057316 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:04.057331 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:04.115813 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:04.115832 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:04.133889 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:04.133905 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:04.212782 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:04.204758   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.205392   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.206948   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.207288   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:04.208782   16060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.713766 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.723767 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:06.723837 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:06.747547 1261197 cri.go:89] found id: ""
	I1217 00:56:06.747561 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.747568 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:06.747574 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:06.747632 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:06.772850 1261197 cri.go:89] found id: ""
	I1217 00:56:06.772864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.772871 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:06.772877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:06.772942 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:06.797087 1261197 cri.go:89] found id: ""
	I1217 00:56:06.797101 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.797108 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:06.797113 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:06.797171 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:06.821815 1261197 cri.go:89] found id: ""
	I1217 00:56:06.821829 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.821836 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:06.821842 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:06.821906 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:06.850207 1261197 cri.go:89] found id: ""
	I1217 00:56:06.850221 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.850229 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:06.850234 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:06.850294 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:06.874139 1261197 cri.go:89] found id: ""
	I1217 00:56:06.874153 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.874160 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:06.874166 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:06.874224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:06.899438 1261197 cri.go:89] found id: ""
	I1217 00:56:06.899453 1261197 logs.go:282] 0 containers: []
	W1217 00:56:06.899461 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:06.899469 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:06.899480 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:06.967530 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:06.958975   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.959516   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961123   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.961674   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:06.963331   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:06.967542 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:06.967554 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:07.030281 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:07.030301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:07.062210 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:07.062226 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:07.121373 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:07.121391 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:09.638141 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.648301 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:09.648359 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:09.672936 1261197 cri.go:89] found id: ""
	I1217 00:56:09.672951 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.672959 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:09.672964 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:09.673022 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:09.697500 1261197 cri.go:89] found id: ""
	I1217 00:56:09.697513 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.697520 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:09.697526 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:09.697583 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:09.723330 1261197 cri.go:89] found id: ""
	I1217 00:56:09.723344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.723352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:09.723360 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:09.723423 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:09.747017 1261197 cri.go:89] found id: ""
	I1217 00:56:09.747032 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.747039 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:09.747044 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:09.747100 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:09.771652 1261197 cri.go:89] found id: ""
	I1217 00:56:09.771666 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.771673 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:09.771678 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:09.771737 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:09.799785 1261197 cri.go:89] found id: ""
	I1217 00:56:09.799799 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.799807 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:09.799812 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:09.799871 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:09.827063 1261197 cri.go:89] found id: ""
	I1217 00:56:09.827077 1261197 logs.go:282] 0 containers: []
	W1217 00:56:09.827085 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:09.827093 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:09.827103 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:09.894392 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:09.886579   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.887120   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.888619   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.889055   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:09.890605   16251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:09.894403 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:09.894413 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:09.955961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:09.955981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:09.982364 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:09.982380 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:10.051689 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:10.051709 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.568963 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.579001 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:12.579065 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:12.603247 1261197 cri.go:89] found id: ""
	I1217 00:56:12.603261 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.603269 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:12.603275 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:12.603332 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:12.635591 1261197 cri.go:89] found id: ""
	I1217 00:56:12.635606 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.635612 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:12.635617 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:12.635676 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:12.659802 1261197 cri.go:89] found id: ""
	I1217 00:56:12.659817 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.659824 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:12.659830 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:12.659887 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:12.684671 1261197 cri.go:89] found id: ""
	I1217 00:56:12.684684 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.684692 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:12.684697 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:12.684766 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:12.712570 1261197 cri.go:89] found id: ""
	I1217 00:56:12.712584 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.712606 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:12.712611 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:12.712668 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:12.739330 1261197 cri.go:89] found id: ""
	I1217 00:56:12.739345 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.739353 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:12.739358 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:12.739416 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:12.767372 1261197 cri.go:89] found id: ""
	I1217 00:56:12.767386 1261197 logs.go:282] 0 containers: []
	W1217 00:56:12.767393 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:12.767401 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:12.767411 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:12.822789 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:12.822807 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:12.839685 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:12.839702 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:12.916219 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:12.907759   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.908464   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910139   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.910712   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:12.912266   16361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:12.916230 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:12.916241 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:12.977800 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:12.977820 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.507621 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.518177 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:15.518240 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:15.544777 1261197 cri.go:89] found id: ""
	I1217 00:56:15.544792 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.544800 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:15.544806 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:15.544864 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:15.569420 1261197 cri.go:89] found id: ""
	I1217 00:56:15.569433 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.569441 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:15.569447 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:15.569505 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:15.594329 1261197 cri.go:89] found id: ""
	I1217 00:56:15.594344 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.594352 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:15.594357 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:15.594417 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:15.619820 1261197 cri.go:89] found id: ""
	I1217 00:56:15.619834 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.619842 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:15.619847 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:15.619911 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:15.645055 1261197 cri.go:89] found id: ""
	I1217 00:56:15.645076 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.645084 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:15.645090 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:15.645152 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:15.671575 1261197 cri.go:89] found id: ""
	I1217 00:56:15.671590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.671597 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:15.671602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:15.671667 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:15.700941 1261197 cri.go:89] found id: ""
	I1217 00:56:15.700955 1261197 logs.go:282] 0 containers: []
	W1217 00:56:15.700963 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:15.700971 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:15.700980 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:15.728886 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:15.728931 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:15.784718 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:15.784736 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:15.799312 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:15.799335 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:15.865192 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:15.855108   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.856459   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.858243   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.859523   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.860252   16479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:15.865203 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:15.865214 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.428562 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.438711 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:18.438772 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:18.465045 1261197 cri.go:89] found id: ""
	I1217 00:56:18.465060 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.465067 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:18.465073 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:18.465132 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:18.490715 1261197 cri.go:89] found id: ""
	I1217 00:56:18.490728 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.490736 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:18.490741 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:18.490799 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:18.519522 1261197 cri.go:89] found id: ""
	I1217 00:56:18.519536 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.519544 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:18.519549 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:18.519611 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:18.545098 1261197 cri.go:89] found id: ""
	I1217 00:56:18.545112 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.545119 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:18.545125 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:18.545183 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:18.570978 1261197 cri.go:89] found id: ""
	I1217 00:56:18.570993 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.571000 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:18.571005 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:18.571063 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:18.594800 1261197 cri.go:89] found id: ""
	I1217 00:56:18.594814 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.594822 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:18.594828 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:18.594884 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:18.618575 1261197 cri.go:89] found id: ""
	I1217 00:56:18.618589 1261197 logs.go:282] 0 containers: []
	W1217 00:56:18.618597 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:18.618604 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:18.618613 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:18.680474 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:18.680494 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:18.708635 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:18.708651 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:18.763927 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:18.763949 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:18.780209 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:18.780225 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:18.849998 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:18.840313   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.841037   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.842881   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.843469   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:18.844431   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:21.351687 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.362159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:21.362230 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:21.390614 1261197 cri.go:89] found id: ""
	I1217 00:56:21.390630 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.390637 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:21.390648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:21.390716 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:21.420609 1261197 cri.go:89] found id: ""
	I1217 00:56:21.420623 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.420630 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:21.420636 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:21.420703 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:21.446943 1261197 cri.go:89] found id: ""
	I1217 00:56:21.446957 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.446964 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:21.446970 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:21.447041 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:21.477813 1261197 cri.go:89] found id: ""
	I1217 00:56:21.477828 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.477835 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:21.477841 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:21.477901 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:21.504024 1261197 cri.go:89] found id: ""
	I1217 00:56:21.504058 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.504065 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:21.504071 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:21.504150 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:21.534132 1261197 cri.go:89] found id: ""
	I1217 00:56:21.534146 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.534154 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:21.534159 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:21.534222 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:21.558094 1261197 cri.go:89] found id: ""
	I1217 00:56:21.558113 1261197 logs.go:282] 0 containers: []
	W1217 00:56:21.558122 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:21.558130 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:21.558141 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:21.620436 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:21.620462 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:21.635283 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:21.635301 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:21.698118 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:21.689697   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.690323   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692017   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.692610   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:21.694317   16676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:21.698128 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:21.698139 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:21.760016 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:21.760037 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.289952 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.300354 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:24.300457 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:24.324823 1261197 cri.go:89] found id: ""
	I1217 00:56:24.324838 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.324846 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:24.324852 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:24.324921 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:24.349508 1261197 cri.go:89] found id: ""
	I1217 00:56:24.349522 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.349528 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:24.349534 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:24.349592 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:24.375701 1261197 cri.go:89] found id: ""
	I1217 00:56:24.375716 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.375723 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:24.375729 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:24.375791 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:24.412359 1261197 cri.go:89] found id: ""
	I1217 00:56:24.412373 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.412380 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:24.412385 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:24.412447 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:24.440423 1261197 cri.go:89] found id: ""
	I1217 00:56:24.440437 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.440444 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:24.440450 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:24.440511 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:24.471294 1261197 cri.go:89] found id: ""
	I1217 00:56:24.471308 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.471316 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:24.471322 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:24.471391 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:24.496845 1261197 cri.go:89] found id: ""
	I1217 00:56:24.496859 1261197 logs.go:282] 0 containers: []
	W1217 00:56:24.496866 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:24.496874 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:24.496892 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:24.526610 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:24.526627 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:24.583266 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:24.583327 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:24.598272 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:24.598288 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:24.660553 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:24.651754   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.652626   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654399   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.654924   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:24.656593   16790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:24.660563 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:24.660574 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:27.222739 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.232603 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:27.232662 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:27.259034 1261197 cri.go:89] found id: ""
	I1217 00:56:27.259048 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.259056 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:27.259061 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:27.259122 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:27.282406 1261197 cri.go:89] found id: ""
	I1217 00:56:27.282420 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.282427 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:27.282432 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:27.282490 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:27.306518 1261197 cri.go:89] found id: ""
	I1217 00:56:27.306532 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.306540 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:27.306545 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:27.306603 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:27.335278 1261197 cri.go:89] found id: ""
	I1217 00:56:27.335292 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.335299 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:27.335305 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:27.335363 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:27.359793 1261197 cri.go:89] found id: ""
	I1217 00:56:27.359808 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.359815 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:27.359829 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:27.359888 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:27.399251 1261197 cri.go:89] found id: ""
	I1217 00:56:27.399275 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.399283 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:27.399289 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:27.399355 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:27.426464 1261197 cri.go:89] found id: ""
	I1217 00:56:27.426477 1261197 logs.go:282] 0 containers: []
	W1217 00:56:27.426495 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:27.426503 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:27.426513 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:27.458980 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:27.458996 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:27.514403 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:27.514424 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:27.528951 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:27.528969 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:27.592165 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:27.584291   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.584882   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586421   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.586848   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:27.588335   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:56:27.592175 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:27.592187 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.157841 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.168783 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:30.168847 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:30.194237 1261197 cri.go:89] found id: ""
	I1217 00:56:30.194251 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.194259 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:30.194264 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:30.194329 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:30.220057 1261197 cri.go:89] found id: ""
	I1217 00:56:30.220072 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.220079 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:30.220084 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:30.220141 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:30.244965 1261197 cri.go:89] found id: ""
	I1217 00:56:30.244980 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.244987 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:30.244992 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:30.245051 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:30.269893 1261197 cri.go:89] found id: ""
	I1217 00:56:30.269907 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.269914 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:30.269919 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:30.269976 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:30.294384 1261197 cri.go:89] found id: ""
	I1217 00:56:30.294398 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.294406 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:30.294411 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:30.294469 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:30.325240 1261197 cri.go:89] found id: ""
	I1217 00:56:30.325254 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.325261 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:30.325266 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:30.325322 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:30.349591 1261197 cri.go:89] found id: ""
	I1217 00:56:30.349604 1261197 logs.go:282] 0 containers: []
	W1217 00:56:30.349611 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:30.349619 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:30.349629 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:30.409349 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:30.409368 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:30.426814 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:30.426833 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:30.497852 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:30.489815   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.490215   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.491858   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.492254   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:30.494012   16985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
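The describe-nodes gather fails for the same root cause as the empty container lists: nothing is serving the apiserver on this profile's port 8441, so every kubectl call is refused at the TCP level. A one-line reachability check against the same endpoint (a sketch; /healthz is normally readable without credentials on a healthy apiserver):

	curl -ksS --max-time 3 https://localhost:8441/healthz || echo "apiserver not listening on 8441"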
	I1217 00:56:30.497861 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:30.497872 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:30.559124 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:30.559146 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
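The container-status gather relies on a small fallback idiom: the backticks substitute crictl's resolved path when `which` finds it and the bare name otherwise, and the final `||` falls through to docker for Docker-runtime clusters. The same line, annotated:

	# "sudo <resolved crictl> ps -a"; else try the bare name; else fall back to docker
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a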
	I1217 00:56:33.090237 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.100535 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:33.100594 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:33.124070 1261197 cri.go:89] found id: ""
	I1217 00:56:33.124085 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.124092 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:33.124098 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:33.124155 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:33.148807 1261197 cri.go:89] found id: ""
	I1217 00:56:33.148821 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.148828 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:33.148833 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:33.148894 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:33.175576 1261197 cri.go:89] found id: ""
	I1217 00:56:33.175590 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.175597 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:33.175602 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:33.175660 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:33.200012 1261197 cri.go:89] found id: ""
	I1217 00:56:33.200026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.200033 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:33.200038 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:33.200095 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:33.224891 1261197 cri.go:89] found id: ""
	I1217 00:56:33.224921 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.224928 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:33.224933 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:33.225001 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:33.249021 1261197 cri.go:89] found id: ""
	I1217 00:56:33.249035 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.249043 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:33.249052 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:33.249108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:33.272696 1261197 cri.go:89] found id: ""
	I1217 00:56:33.272710 1261197 logs.go:282] 0 containers: []
	W1217 00:56:33.272717 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:33.272733 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:33.272743 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:33.333826 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:33.333848 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:33.363111 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:33.363134 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:33.426200 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:33.426219 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:33.444135 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:33.444152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:33.510910 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:33.502166   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.502968   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.504709   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.505302   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:33.506971   17105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:36.011142 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.023140 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:36.023216 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:36.051599 1261197 cri.go:89] found id: ""
	I1217 00:56:36.051614 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.051622 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:36.051628 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 00:56:36.051700 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:36.076217 1261197 cri.go:89] found id: ""
	I1217 00:56:36.076231 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.076239 1261197 logs.go:284] No container was found matching "etcd"
	I1217 00:56:36.076244 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 00:56:36.076305 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:36.104998 1261197 cri.go:89] found id: ""
	I1217 00:56:36.105026 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.105034 1261197 logs.go:284] No container was found matching "coredns"
	I1217 00:56:36.105039 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:36.105108 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:36.130127 1261197 cri.go:89] found id: ""
	I1217 00:56:36.130142 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.130149 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:36.130154 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:36.130224 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:36.155615 1261197 cri.go:89] found id: ""
	I1217 00:56:36.155629 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.155636 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:36.155648 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:36.155709 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:36.181850 1261197 cri.go:89] found id: ""
	I1217 00:56:36.181864 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.181872 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:36.181877 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:36.181937 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:36.208111 1261197 cri.go:89] found id: ""
	I1217 00:56:36.208126 1261197 logs.go:282] 0 containers: []
	W1217 00:56:36.208133 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:36.208141 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:36.208152 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:36.266007 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:36.266031 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:36.281259 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:36.281275 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:36.346325 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:36.337981   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.338678   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340157   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.340875   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:36.342538   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:36.346335 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 00:56:36.346345 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 00:56:36.412961 1261197 logs.go:123] Gathering logs for container status ...
	I1217 00:56:36.412981 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:38.945107 1261197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.955445 1261197 kubeadm.go:602] duration metric: took 4m3.371937848s to restartPrimaryControlPlane
	W1217 00:56:38.955509 1261197 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:56:38.955586 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 00:56:39.375604 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:56:39.388977 1261197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:56:39.396884 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
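SystemVerification is skipped on the docker driver because the kernel, modules, and cgroup hierarchy that check inspects belong to the host rather than to the node container. To see what that phase alone would report, kubeadm can run just the preflight step (a sketch using the same binary path and config file as this run):

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml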
	I1217 00:56:39.396954 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:56:39.404783 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:56:39.404792 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 00:56:39.404853 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:56:39.412686 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:56:39.412740 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:56:39.420350 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:56:39.427923 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:56:39.427975 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:56:39.435272 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.442721 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:56:39.442775 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:56:39.450389 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:56:39.458043 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:56:39.458098 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
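The grep/rm sequence above is a stale-kubeconfig sweep: each file under /etc/kubernetes that does not already point at the expected control-plane endpoint is deleted so the upcoming kubeadm init can regenerate it. The same check written as a loop (endpoint taken from the log lines above):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done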
	I1217 00:56:39.465332 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:56:39.508240 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:56:39.508300 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:56:39.586995 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:56:39.587071 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:56:39.587116 1261197 kubeadm.go:319] OS: Linux
	I1217 00:56:39.587161 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:56:39.587217 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:56:39.587273 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:56:39.587330 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:56:39.587376 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:56:39.587433 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:56:39.587488 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:56:39.587544 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:56:39.587589 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:56:39.658303 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:56:39.658422 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:56:39.658518 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:56:39.670076 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:56:39.675448 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 00:56:39.675545 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:56:39.675618 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:56:39.675704 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:56:39.675774 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:56:39.675852 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:56:39.675914 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:56:39.675983 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:56:39.676053 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:56:39.676144 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:56:39.676224 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:56:39.676260 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:56:39.676329 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:56:39.801204 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:56:39.954898 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:56:40.065909 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:56:40.451062 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:56:40.596539 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:56:40.597062 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:56:40.600429 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:56:40.603602 1261197 out.go:252]   - Booting up control plane ...
	I1217 00:56:40.603714 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:56:40.603797 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:56:40.604963 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:56:40.625747 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:56:40.625851 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:56:40.633757 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:56:40.634255 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:56:40.634396 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:56:40.778162 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:56:40.778280 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:00:40.776324 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243331s
	I1217 01:00:40.776348 1261197 kubeadm.go:319] 
	I1217 01:00:40.776405 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:00:40.776437 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:00:40.776540 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:00:40.776544 1261197 kubeadm.go:319] 
	I1217 01:00:40.776648 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:00:40.776679 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:00:40.776709 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:00:40.776712 1261197 kubeadm.go:319] 
	I1217 01:00:40.780629 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:00:40.781051 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:00:40.781158 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:00:40.781394 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:00:40.781398 1261197 kubeadm.go:319] 
	I1217 01:00:40.781466 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
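The init run dies at kubeadm's kubelet health gate: it polls the kubelet's local healthz endpoint for up to 4m0s and never gets a response, meaning the kubelet process itself never comes up healthy (consistent with the cgroup v1 warning above). The same probe and the suggested triage, run by hand:

	curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50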
	W1217 01:00:40.781578 1261197 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243331s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:00:40.781696 1261197 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
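Having given up on the first attempt, minikube wipes the node and replays the identical bootstrap. Condensed to its two steps (the init flags are abbreviated here to the one ignore that matters; the log above shows the full list):

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification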
	I1217 01:00:41.195061 1261197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:00:41.209438 1261197 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:00:41.209493 1261197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:00:41.218235 1261197 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:00:41.218244 1261197 kubeadm.go:158] found existing configuration files:
	
	I1217 01:00:41.218300 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:00:41.226394 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:00:41.226448 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:00:41.234445 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:00:41.242558 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:00:41.242613 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:00:41.250526 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.258573 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:00:41.258634 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:00:41.266278 1261197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:00:41.274420 1261197 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:00:41.274476 1261197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:00:41.281748 1261197 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:00:41.319491 1261197 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:00:41.319792 1261197 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:00:41.392691 1261197 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:00:41.392755 1261197 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:00:41.392789 1261197 kubeadm.go:319] OS: Linux
	I1217 01:00:41.392833 1261197 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:00:41.392880 1261197 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:00:41.392926 1261197 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:00:41.392972 1261197 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:00:41.393025 1261197 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:00:41.393072 1261197 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:00:41.393116 1261197 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:00:41.393163 1261197 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:00:41.393208 1261197 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:00:41.471655 1261197 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:00:41.471787 1261197 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:00:41.471905 1261197 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:00:41.482138 1261197 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:00:41.485739 1261197 out.go:252]   - Generating certificates and keys ...
	I1217 01:00:41.485837 1261197 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:00:41.485905 1261197 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:00:41.485986 1261197 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:00:41.486050 1261197 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:00:41.486123 1261197 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:00:41.486180 1261197 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:00:41.486253 1261197 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:00:41.486318 1261197 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:00:41.486396 1261197 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:00:41.486478 1261197 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:00:41.486522 1261197 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:00:41.486584 1261197 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:00:41.603323 1261197 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:00:41.901106 1261197 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:00:42.054265 1261197 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:00:42.414109 1261197 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:00:42.682518 1261197 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:00:42.683180 1261197 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:00:42.685848 1261197 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:00:42.689217 1261197 out.go:252]   - Booting up control plane ...
	I1217 01:00:42.689317 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:00:42.689401 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:00:42.689468 1261197 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:00:42.713083 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:00:42.713185 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:00:42.721813 1261197 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:00:42.722110 1261197 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:00:42.722158 1261197 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:00:42.862014 1261197 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:00:42.862133 1261197 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:04:42.862018 1261197 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284909s
	I1217 01:04:42.862056 1261197 kubeadm.go:319] 
	I1217 01:04:42.862124 1261197 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:04:42.862167 1261197 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:04:42.862279 1261197 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:04:42.862283 1261197 kubeadm.go:319] 
	I1217 01:04:42.862390 1261197 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:04:42.862421 1261197 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:04:42.862451 1261197 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:04:42.862457 1261197 kubeadm.go:319] 
	I1217 01:04:42.866725 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:04:42.867116 1261197 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:04:42.867218 1261197 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:04:42.867438 1261197 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:04:42.867443 1261197 kubeadm.go:319] 
	I1217 01:04:42.867507 1261197 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
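Both attempts fail identically, and the second SystemVerification warning names the likely culprit: this 5.15 AWS kernel is running cgroup v1, which kubelet v1.35 refuses by default. A hypothetical opt-out, appending the field the warning names to the generated kubelet config (camelCase per KubeletConfiguration conventions; assumes the key is not already set, and should be verified against the kubelet version in use):

	# hypothetical: re-enable cgroup v1 for kubelet >= v1.35, per the warning text
	echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet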
	I1217 01:04:42.867593 1261197 kubeadm.go:403] duration metric: took 12m7.31765155s to StartCluster
	I1217 01:04:42.867623 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:04:42.867685 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:04:42.892141 1261197 cri.go:89] found id: ""
	I1217 01:04:42.892155 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.892162 1261197 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:04:42.892167 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:04:42.892231 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:04:42.916795 1261197 cri.go:89] found id: ""
	I1217 01:04:42.916809 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.916817 1261197 logs.go:284] No container was found matching "etcd"
	I1217 01:04:42.916822 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:04:42.916879 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:04:42.945762 1261197 cri.go:89] found id: ""
	I1217 01:04:42.945776 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.945783 1261197 logs.go:284] No container was found matching "coredns"
	I1217 01:04:42.945794 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:04:42.945850 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:04:42.970080 1261197 cri.go:89] found id: ""
	I1217 01:04:42.970094 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.970100 1261197 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:04:42.970105 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:04:42.970161 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:04:42.994293 1261197 cri.go:89] found id: ""
	I1217 01:04:42.994307 1261197 logs.go:282] 0 containers: []
	W1217 01:04:42.994314 1261197 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:04:42.994319 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:04:42.994375 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:04:43.019856 1261197 cri.go:89] found id: ""
	I1217 01:04:43.019871 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.019879 1261197 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:04:43.019884 1261197 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:04:43.019980 1261197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:04:43.044643 1261197 cri.go:89] found id: ""
	I1217 01:04:43.044657 1261197 logs.go:282] 0 containers: []
	W1217 01:04:43.044664 1261197 logs.go:284] No container was found matching "kindnet"
	I1217 01:04:43.044672 1261197 logs.go:123] Gathering logs for kubelet ...
	I1217 01:04:43.044682 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:04:43.100644 1261197 logs.go:123] Gathering logs for dmesg ...
	I1217 01:04:43.100662 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:04:43.115507 1261197 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:04:43.115524 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:04:43.206420 1261197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:04:43.197597   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.198381   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.199999   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.200494   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:04:43.202136   20965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:04:43.206430 1261197 logs.go:123] Gathering logs for containerd ...
	I1217 01:04:43.206440 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:04:43.268190 1261197 logs.go:123] Gathering logs for container status ...
	I1217 01:04:43.268210 1261197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 01:04:43.298717 1261197 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:04:43.298758 1261197 out.go:285] * 
	W1217 01:04:43.298817 1261197 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 01:04:43.298838 1261197 out.go:285] * 
	W1217 01:04:43.301057 1261197 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:04:43.305981 1261197 out.go:203] 
	W1217 01:04:43.308777 1261197 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 01:04:43.308838 1261197 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:04:43.308858 1261197 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:04:43.311954 1261197 out.go:203] 
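
Minikube's own suggestion above is the first thing to try: the cgroup driver can be passed straight through to the kubelet. A minimal sketch of the retry, reusing only values that appear in this log (profile functional-608344, docker driver, containerd runtime, Kubernetes v1.35.0-beta.0); this is illustrative, not the test's own invocation:

	minikube start -p functional-608344 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd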
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243334749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243430323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243566916Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243654818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243723127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243793503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243862870Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.243933976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244147632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244278333Z" level=info msg="Connect containerd service"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.244760505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.246010456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.254958867Z" level=info msg="Start subscribing containerd event"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255148908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255207460Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.255280454Z" level=info msg="Start recovering state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.295825702Z" level=info msg="Start event monitor"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296048071Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296114033Z" level=info msg="Start streaming server"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296179503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296236685Z" level=info msg="runtime interface starting up..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296301301Z" level=info msg="starting plugins..."
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296367492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:52:34 functional-608344 containerd[9745]: time="2025-12-17T00:52:34.296572451Z" level=info msg="containerd successfully booted in 0.086094s"
	Dec 17 00:52:34 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
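
One line worth flagging in the containerd log is the CNI load failure ("no network config found in /etc/cni/net.d"). That is normal before a CNI config has been written, but it is easy to confirm by hand; a sketch, assuming the docker-driver node container is still up:

	# list whatever CNI configs exist inside the minikube node
	minikube ssh -p functional-608344 "ls -la /etc/cni/net.d"

	# containerd's own view of its CRI and CNI state
	minikube ssh -p functional-608344 "sudo crictl info"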
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:06:37.805126   22391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:37.805553   22391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:37.806809   22391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:37.807215   22391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:06:37.808854   22391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
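
The describe-nodes failure is consistent with everything else here: nothing is listening on the apiserver port. A quick probe from the host, using the port from the log (8441; /healthz is the standard kube-apiserver health endpoint):

	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver not reachable"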
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:06:37 up  6:49,  0 user,  load average: 0.20, 0.21, 0.48
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:06:34 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:34 functional-608344 kubelet[22274]: E1217 01:06:34.912365   22274 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:34 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:34 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:35 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 17 01:06:35 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:35 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:35 functional-608344 kubelet[22280]: E1217 01:06:35.667680   22280 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:35 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:35 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:36 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 17 01:06:36 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:36 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:36 functional-608344 kubelet[22285]: E1217 01:06:36.433931   22285 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:36 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:36 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:37 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 17 01:06:37 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:37 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:37 functional-608344 kubelet[22306]: E1217 01:06:37.192976   22306 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:06:37 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:06:37 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:06:37 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 473.
	Dec 17 01:06:37 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:06:37 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
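
The kubelet journal above shows the actual blocker: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled (the restart counter in the 470s is a tight crash loop), which matches the kubeadm SystemVerification warning earlier. A sketch of the two checks that follow from it; the YAML spelling failCgroupV1 is assumed to be the camelCase form of the FailCgroupV1 option named in the warning:

	# 'cgroup2fs' means cgroup v2; 'tmpfs' means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup

	# illustrative KubeletConfiguration fragment re-enabling cgroup v1 support
	cat <<'EOF' > kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF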
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (382.323384ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:05:00.772449 1211243 retry.go:31] will retry after 4.006533315s: Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1217 01:05:09.433496 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:05:14.780025 1211243 retry.go:31] will retry after 3.017938229s: Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:05:27.798891 1211243 retry.go:31] will retry after 6.289457532s: Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:05:44.089725 1211243 retry.go:31] will retry after 9.250150169s: Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:06:03.340888 1211243 retry.go:31] will retry after 22.701272706s: Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1217 01:06:56.877289 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeated verbatim 75 more times]
E1217 01:08:12.511448 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeated verbatim 36 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
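The helper is polling the label selector shown in the warnings above, and every attempt dies with "connection refused" until the 4m0s deadline expires, because nothing is listening on 192.168.49.2:8441. A rough manual equivalent of that poll (a sketch only; it reuses the kubeconfig context name from this test run) would be:

	# Query the same label selector the test helper polls; while the apiserver
	# is down this fails with the same "connection refused".
	kubectl --context functional-608344 -n kube-system get pods -l integration-test=storage-provisioner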
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (315.587241ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (321.298064ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-608344 image ls                                                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image save --daemon kicbase/echo-server:functional-608344 --alsologtostderr                                                               │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /etc/ssl/certs/1211243.pem                                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /usr/share/ca-certificates/1211243.pem                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /etc/ssl/certs/12112432.pem                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /usr/share/ca-certificates/12112432.pem                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh sudo cat /etc/test/nested/copy/1211243/hosts                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ cp             │ functional-608344 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh -n functional-608344 sudo cat /home/docker/cp-test.txt                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ cp             │ functional-608344 cp functional-608344:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp669975446/001/cp-test.txt │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh -n functional-608344 sudo cat /home/docker/cp-test.txt                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ cp             │ functional-608344 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh -n functional-608344 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image ls --format short --alsologtostderr                                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image ls --format yaml --alsologtostderr                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh            │ functional-608344 ssh pgrep buildkitd                                                                                                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │                     │
	│ image          │ functional-608344 image build -t localhost/my-image:functional-608344 testdata/build --alsologtostderr                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image ls                                                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image ls --format json --alsologtostderr                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image          │ functional-608344 image ls --format table --alsologtostderr                                                                                                 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ update-context │ functional-608344 update-context --alsologtostderr -v=2                                                                                                     │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ update-context │ functional-608344 update-context --alsologtostderr -v=2                                                                                                     │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ update-context │ functional-608344 update-context --alsologtostderr -v=2                                                                                                     │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:06:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:06:52.678226 1278041 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:06:52.678416 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678444 1278041 out.go:374] Setting ErrFile to fd 2...
	I1217 01:06:52.678471 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678859 1278041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:06:52.679342 1278041 out.go:368] Setting JSON to false
	I1217 01:06:52.680559 1278041 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24563,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:06:52.680658 1278041 start.go:143] virtualization:  
	I1217 01:06:52.683773 1278041 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:06:52.687734 1278041 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:06:52.687819 1278041 notify.go:221] Checking for updates...
	I1217 01:06:52.693713 1278041 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:06:52.696651 1278041 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:06:52.699691 1278041 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:06:52.702755 1278041 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:06:52.705750 1278041 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:06:52.709191 1278041 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:06:52.709996 1278041 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:06:52.733434 1278041 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:06:52.733546 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.795063 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.785060306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.795175 1278041 docker.go:319] overlay module found
	I1217 01:06:52.798295 1278041 out.go:179] * Using the docker driver based on existing profile
	I1217 01:06:52.801102 1278041 start.go:309] selected driver: docker
	I1217 01:06:52.801151 1278041 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.801250 1278041 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:06:52.801362 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.856592 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.847153487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.857047 1278041 cni.go:84] Creating CNI manager for ""
	I1217 01:06:52.857113 1278041 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:06:52.857152 1278041 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.860228 1278041 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.658919249Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.659463878Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.693743928Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.696483463Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.698783403Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.707804010Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\" returns successfully"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.934527733Z" level=info msg="No images store for sha256:5afb1ac715edcec0343c29ae3cc508321a1981c1d2c82f12ea8b0d9e6a689035"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.936787763Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.943987402Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.944604229Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.750640924Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.753057633Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.755043100Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.765033515Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\" returns successfully"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.441783244Z" level=info msg="No images store for sha256:8827e8c33a0cd121f298af02732dd52aa22219d3352dc8c70e08e7aaf878a2ed"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.443985436Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.454169093Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.454828021Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:20 functional-608344 containerd[9745]: time="2025-12-17T01:07:20.732016699Z" level=info msg="connecting to shim pxmrc46yam3461jaz3f8cvlqo" address="unix:///run/containerd/s/80dcfe735756dfe67eb0cfb3b5fdb61d413240e3db4bf61daa28483a099c8c13" namespace=k8s.io protocol=ttrpc version=3
	Dec 17 01:07:20 functional-608344 containerd[9745]: time="2025-12-17T01:07:20.806686924Z" level=info msg="shim disconnected" id=pxmrc46yam3461jaz3f8cvlqo namespace=k8s.io
	Dec 17 01:07:20 functional-608344 containerd[9745]: time="2025-12-17T01:07:20.806727015Z" level=info msg="cleaning up after shim disconnected" id=pxmrc46yam3461jaz3f8cvlqo namespace=k8s.io
	Dec 17 01:07:20 functional-608344 containerd[9745]: time="2025-12-17T01:07:20.806737493Z" level=info msg="cleaning up dead shim" id=pxmrc46yam3461jaz3f8cvlqo namespace=k8s.io
	Dec 17 01:07:21 functional-608344 containerd[9745]: time="2025-12-17T01:07:21.097608405Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-608344\""
	Dec 17 01:07:21 functional-608344 containerd[9745]: time="2025-12-17T01:07:21.106216343Z" level=info msg="ImageCreate event name:\"sha256:98ff2221b87adb56f52833ed422c8ee7c7eb4b9fdd0bc044cd8cc5ac84f0bcf6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:21 functional-608344 containerd[9745]: time="2025-12-17T01:07:21.106581270Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:08:51.685686   25147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:51.686172   25147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:51.687973   25147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:51.688699   25147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:51.690431   25147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:08:51 up  6:51,  0 user,  load average: 0.24, 0.28, 0.47
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:08:48 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 17 01:08:49 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:49 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:49 functional-608344 kubelet[25020]: E1217 01:08:49.170841   25020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 17 01:08:49 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:49 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:49 functional-608344 kubelet[25025]: E1217 01:08:49.918964   25025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:49 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:50 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 17 01:08:50 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:50 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:50 functional-608344 kubelet[25038]: E1217 01:08:50.689191   25038 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:50 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:50 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:51 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 17 01:08:51 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:51 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:51 functional-608344 kubelet[25080]: E1217 01:08:51.435630   25080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:51 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:51 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (318.864389ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)
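The kubelet journal above pins down why the apiserver never comes back: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported") and systemd restarts it in a tight loop (restart counter 648 through 651 in this window alone). A quick way to confirm which cgroup mode the node container sees (a diagnostic sketch, not part of the recorded test run, assuming GNU stat is available in the kicbase image) is:

	# "cgroup2fs" means cgroup v2; "tmpfs" indicates legacy cgroup v1,
	# which this kubelet build rejects at configuration validation.
	docker exec functional-608344 stat -fc %T /sys/fs/cgroup/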

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-608344 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-608344 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (63.109332ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-608344 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
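All five label assertions fail the same way: with the apiserver refusing connections, kubectl gets back an empty List, so `index .items 0` panics with "slice index out of range" before the missing-label check is ever reached. A guarded template (illustrative only, not what the test harness runs) avoids the panic and simply prints nothing when no nodes are returned:

	kubectl --context functional-608344 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'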
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:

-- stdout --
	[
	    {
	        "Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	        "Created": "2025-12-17T00:37:51.919492207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:37:51.980484436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
	        "LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
	        "Name": "/functional-608344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-608344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-608344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
	                "LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-608344",
	                "Source": "/var/lib/docker/volumes/functional-608344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-608344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-608344",
	                "name.minikube.sigs.k8s.io": "functional-608344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
	            "SandboxKey": "/var/run/docker/netns/1788902206da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-608344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:51:82:0a:0a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
	                    "EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-608344",
	                        "c4b80a2791ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
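
For spot checks against dumps like the one above, docker inspect accepts a Go-template --format flag, so a single field can be pulled without scanning the full JSON; a sketch (the format string is illustrative, not something this run executed):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-608344

Given the state captured above, this would print 192.168.49.2.
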
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 2 (317.306993ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount2 --alsologtostderr -v=1                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ mount     │ -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount3 --alsologtostderr -v=1                            │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ ssh       │ functional-608344 ssh findmnt -T /mount1                                                                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh findmnt -T /mount2                                                                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ ssh       │ functional-608344 ssh findmnt -T /mount3                                                                                                                        │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:06 UTC │
	│ mount     │ -p functional-608344 --kill=true                                                                                                                                │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ start     │ -p functional-608344 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-608344 --alsologtostderr -v=1                                                                                                  │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 01:06 UTC │ 17 Dec 25 01:07 UTC │
	│ ssh       │ functional-608344 ssh sudo systemctl is-active docker                                                                                                           │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │                     │
	│ ssh       │ functional-608344 ssh sudo systemctl is-active crio                                                                                                             │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │                     │
	│ image     │ functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image ls                                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image ls                                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image ls                                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image save kicbase/echo-server:functional-608344 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image rm kicbase/echo-server:functional-608344 --alsologtostderr                                                                              │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image ls                                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image ls                                                                                                                                      │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	│ image     │ functional-608344 image save --daemon kicbase/echo-server:functional-608344 --alsologtostderr                                                                   │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 01:07 UTC │ 17 Dec 25 01:07 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:06:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:06:52.678226 1278041 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:06:52.678416 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678444 1278041 out.go:374] Setting ErrFile to fd 2...
	I1217 01:06:52.678471 1278041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.678859 1278041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:06:52.679342 1278041 out.go:368] Setting JSON to false
	I1217 01:06:52.680559 1278041 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24563,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:06:52.680658 1278041 start.go:143] virtualization:  
	I1217 01:06:52.683773 1278041 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:06:52.687734 1278041 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:06:52.687819 1278041 notify.go:221] Checking for updates...
	I1217 01:06:52.693713 1278041 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:06:52.696651 1278041 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:06:52.699691 1278041 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:06:52.702755 1278041 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:06:52.705750 1278041 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:06:52.709191 1278041 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:06:52.709996 1278041 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:06:52.733434 1278041 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:06:52.733546 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.795063 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.785060306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.795175 1278041 docker.go:319] overlay module found
	I1217 01:06:52.798295 1278041 out.go:179] * Using the docker driver based on existing profile
	I1217 01:06:52.801102 1278041 start.go:309] selected driver: docker
	I1217 01:06:52.801151 1278041 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.801250 1278041 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:06:52.801362 1278041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.856592 1278041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.847153487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.857047 1278041 cni.go:84] Creating CNI manager for ""
	I1217 01:06:52.857113 1278041 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:06:52.857152 1278041 start.go:353] cluster config:
	{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.860228 1278041 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:07:07 functional-608344 containerd[9745]: time="2025-12-17T01:07:07.581084839Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.376778803Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\""
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.379563106Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.381740993Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.390401818Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\" returns successfully"
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.642288224Z" level=info msg="No images store for sha256:5afb1ac715edcec0343c29ae3cc508321a1981c1d2c82f12ea8b0d9e6a689035"
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.644578515Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.658919249Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:08 functional-608344 containerd[9745]: time="2025-12-17T01:07:08.659463878Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.693743928Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.696483463Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.698783403Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.707804010Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\" returns successfully"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.934527733Z" level=info msg="No images store for sha256:5afb1ac715edcec0343c29ae3cc508321a1981c1d2c82f12ea8b0d9e6a689035"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.936787763Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.943987402Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:09 functional-608344 containerd[9745]: time="2025-12-17T01:07:09.944604229Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.750640924Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.753057633Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.755043100Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 01:07:10 functional-608344 containerd[9745]: time="2025-12-17T01:07:10.765033515Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-608344\" returns successfully"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.441783244Z" level=info msg="No images store for sha256:8827e8c33a0cd121f298af02732dd52aa22219d3352dc8c70e08e7aaf878a2ed"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.443985436Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-608344\""
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.454169093Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:07:11 functional-608344 containerd[9745]: time="2025-12-17T01:07:11.454828021Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-608344\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:07:13.025580   23869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:13.026221   23869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:13.027836   23869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:13.028335   23869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:13.029946   23869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:07:13 up  6:49,  0 user,  load average: 0.51, 0.30, 0.50
	Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:07:09 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 17 01:07:10 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:10 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:10 functional-608344 kubelet[23606]: E1217 01:07:10.183362   23606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 17 01:07:10 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:10 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:10 functional-608344 kubelet[23670]: E1217 01:07:10.928868   23670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:10 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:11 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 17 01:07:11 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:11 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:11 functional-608344 kubelet[23723]: E1217 01:07:11.666050   23723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:11 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:11 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:12 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 17 01:07:12 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:12 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:12 functional-608344 kubelet[23783]: E1217 01:07:12.432893   23783 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:12 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:12 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
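
The kubelet section above is the root cause behind every connection-refused failure in this group: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), systemd restarts it in a loop (restart counters 516-519), and the apiserver on 8441 never comes up. The runner's Ubuntu 20.04 / 5.15.0-1084-aws kernel reported in the docker info above still boots with cgroup v1 by default. One generic way to confirm a host's cgroup mode (not a command the harness runs):

	# Prints cgroup2fs on a cgroup v2 (unified) host, tmpfs on a cgroup v1 host:
	stat -fc %T /sys/fs/cgroup/
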
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 2 (322.745997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1217 01:04:50.289435 1273970 out.go:360] Setting OutFile to fd 1 ...
I1217 01:04:50.298731 1273970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:04:50.298750 1273970 out.go:374] Setting ErrFile to fd 2...
I1217 01:04:50.298757 1273970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:04:50.299090 1273970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:04:50.299447 1273970 mustload.go:66] Loading cluster: functional-608344
I1217 01:04:50.299960 1273970 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:04:50.303792 1273970 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:04:50.326310 1273970 host.go:66] Checking if "functional-608344" exists ...
I1217 01:04:50.326655 1273970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:04:50.439969 1273970 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:04:50.429112017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:04:50.440129 1273970 api_server.go:166] Checking apiserver status ...
I1217 01:04:50.440195 1273970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 01:04:50.440243 1273970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:04:50.468053 1273970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
W1217 01:04:50.572094 1273970 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 01:04:50.575483 1273970 out.go:179] * The control-plane node functional-608344 apiserver is not running: (state=Stopped)
I1217 01:04:50.578561 1273970 out.go:179]   To start a cluster, run: "minikube start -p functional-608344"

                                                
                                                
stdout: * The control-plane node functional-608344 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-608344"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1273969: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
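
Both tunnel daemons exit with status 103 right after the api_server.go check above finds no kube-apiserver process (state=Stopped), so the second-tunnel assertion never has a running first tunnel to compare against. The same status probe the post-mortem helpers use elsewhere in this report would show that up front (illustrative, not part of this test):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344
	# Prints "Stopped" while the apiserver is down, as in the NodeLabels post-mortem above.
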

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-608344 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-608344 apply -f testdata/testsvc.yaml: exit status 1 (78.954084ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-608344 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)
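
The apply fails while kubectl is downloading the OpenAPI schema for client-side validation, which is why the error points at /openapi/v2 rather than at the manifest itself. A direct reachability probe separates the two concerns (an illustrative check, not part of the test):

	kubectl --context functional-608344 get --raw /readyz
	# Fails with the same connection-refused error while 192.168.49.2:8441 is down.
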

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (105.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.106.82.160": Temporary Error: Get "http://10.106.82.160": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-608344 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-608344 get svc nginx-svc: exit status 1 (61.56593ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-608344 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (105.33s)
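
The Client.Timeout above comes from probing the nginx-svc address directly while nothing on the host is routing 10.106.82.160; the probe can be reproduced by hand with a bounded timeout instead of waiting out the test budget (illustrative):

	curl --max-time 10 http://10.106.82.160
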

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-608344 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-608344 create deployment hello-node --image kicbase/echo-server: exit status 1 (53.536595ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-608344 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 service list: exit status 103 (245.307123ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-608344 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-608344"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-608344 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 service list -o json: exit status 103 (250.942838ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-608344 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-608344"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-608344 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 service --namespace=default --https --url hello-node: exit status 103 (279.38853ms)

-- stdout --
	* The control-plane node functional-608344 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-608344"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-608344 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 service hello-node --url --format={{.IP}}: exit status 103 (248.42857ms)

-- stdout --
	* The control-plane node functional-608344 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-608344"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-608344 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)
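The test at functional_test.go:1558 validates the command output as an IP address, and the multi-line status message naturally fails that check. A minimal sketch of the validation using only the standard library (the input string is copied from the failure above):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		got := "* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\""
		// net.ParseIP returns nil for anything that is not a literal IP.
		if net.ParseIP(got) == nil {
			fmt.Printf("%q is not a valid IP\n", got)
		}
	}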

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 service hello-node --url: exit status 103 (280.312528ms)

-- stdout --
	* The control-plane node functional-608344 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-608344"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-608344 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-608344 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-608344"
functional_test.go:1579: failed to parse "* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\"": parse "* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)
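The parse error at functional_test.go:1579 is exactly what net/url reports when the candidate URL contains an ASCII control character, here the newline embedded in the status message. A minimal reproduction, assuming only the standard library:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		endpoint := "* The control-plane node functional-608344 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-608344\""
		if _, err := url.Parse(endpoint); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
	}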

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765933603668091719" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765933603668091719" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765933603668091719" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001/test-1765933603668091719
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.1865ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 01:06:44.021540 1211243 retry.go:31] will retry after 258.172488ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 01:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 01:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 01:06 test-1765933603668091719
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh cat /mount-9p/test-1765933603668091719
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-608344 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-608344 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (88.411086ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-608344 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (295.237875ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=46267)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 17 01:06 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 17 01:06 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 17 01:06 test-1765933603668091719
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-608344 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46267
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001:/mount-9p --alsologtostderr -v=1] stderr:
I1217 01:06:43.746795 1276166 out.go:360] Setting OutFile to fd 1 ...
I1217 01:06:43.747135 1276166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:06:43.747158 1276166 out.go:374] Setting ErrFile to fd 2...
I1217 01:06:43.747173 1276166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:06:43.747434 1276166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:06:43.747730 1276166 mustload.go:66] Loading cluster: functional-608344
I1217 01:06:43.748130 1276166 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:06:43.748623 1276166 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:06:43.767495 1276166 host.go:66] Checking if "functional-608344" exists ...
I1217 01:06:43.767796 1276166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:06:43.851648 1276166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:43.842301783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:06:43.851821 1276166 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:06:43.897877 1276166 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001 into VM as /mount-9p ...
I1217 01:06:43.901529 1276166 out.go:179]   - Mount type:   9p
I1217 01:06:43.907318 1276166 out.go:179]   - User ID:      docker
I1217 01:06:43.910169 1276166 out.go:179]   - Group ID:     docker
I1217 01:06:43.913607 1276166 out.go:179]   - Version:      9p2000.L
I1217 01:06:43.916522 1276166 out.go:179]   - Message Size: 262144
I1217 01:06:43.919914 1276166 out.go:179]   - Options:      map[]
I1217 01:06:43.922764 1276166 out.go:179]   - Bind Address: 192.168.49.1:46267
I1217 01:06:43.925448 1276166 out.go:179] * Userspace file server: 
I1217 01:06:43.925881 1276166 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 01:06:43.925989 1276166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:06:43.950609 1276166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:06:44.045357 1276166 mount.go:180] unmount for /mount-9p ran successfully
I1217 01:06:44.045387 1276166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1217 01:06:44.054546 1276166 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46267,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1217 01:06:44.065060 1276166 main.go:127] stdlog: ufs.go:141 connected
I1217 01:06:44.065231 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tversion tag 65535 msize 262144 version '9P2000.L'
I1217 01:06:44.065275 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rversion tag 65535 msize 262144 version '9P2000'
I1217 01:06:44.065490 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1217 01:06:44.065573 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rattach tag 0 aqid (15c3f62 29d85b50 'd')
I1217 01:06:44.066282 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 0
I1217 01:06:44.066343 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3f62 29d85b50 'd') m d775 at 0 mt 1765933603 l 4096 t 0 d 0 ext )
I1217 01:06:44.070694 1276166 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/.mount-process: {Name:mk379dba9136cfca842320dc37e239bb904b662e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:06:44.070889 1276166 mount.go:105] mount successful: ""
I1217 01:06:44.074280 1276166 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3902487101/001 to /mount-9p
I1217 01:06:44.077193 1276166 out.go:203] 
I1217 01:06:44.080056 1276166 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1217 01:06:44.818104 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 0
I1217 01:06:44.818200 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3f62 29d85b50 'd') m d775 at 0 mt 1765933603 l 4096 t 0 d 0 ext )
I1217 01:06:44.818566 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 1 
I1217 01:06:44.818606 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 
I1217 01:06:44.818749 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Topen tag 0 fid 1 mode 0
I1217 01:06:44.818820 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Ropen tag 0 qid (15c3f62 29d85b50 'd') iounit 0
I1217 01:06:44.818935 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 0
I1217 01:06:44.818975 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3f62 29d85b50 'd') m d775 at 0 mt 1765933603 l 4096 t 0 d 0 ext )
I1217 01:06:44.819136 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 0 count 262120
I1217 01:06:44.819259 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 258
I1217 01:06:44.819395 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 261862
I1217 01:06:44.819424 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:44.819538 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:06:44.819589 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:44.819740 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 01:06:44.819786 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f63 29d85b50 '') 
I1217 01:06:44.819895 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.819941 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3f63 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.820089 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.820129 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3f63 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.820258 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:44.820294 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:44.820436 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'test-1765933603668091719' 
I1217 01:06:44.820468 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f65 29d85b50 '') 
I1217 01:06:44.820594 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.820630 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.820752 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.820785 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.820897 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:44.820916 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:44.821063 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 01:06:44.821104 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f64 29d85b50 '') 
I1217 01:06:44.821221 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.821259 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3f64 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.821394 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:44.821425 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3f64 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:44.821537 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:44.821562 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:44.821696 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:06:44.821733 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:44.821884 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 1
I1217 01:06:44.821926 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.139346 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 1 0:'test-1765933603668091719' 
I1217 01:06:45.139442 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f65 29d85b50 '') 
I1217 01:06:45.139755 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 1
I1217 01:06:45.139827 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.140097 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 1 newfid 2 
I1217 01:06:45.140154 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 
I1217 01:06:45.140412 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Topen tag 0 fid 2 mode 0
I1217 01:06:45.140501 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Ropen tag 0 qid (15c3f65 29d85b50 '') iounit 0
I1217 01:06:45.140980 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 1
I1217 01:06:45.141094 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.141532 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 2 offset 0 count 262120
I1217 01:06:45.141616 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 24
I1217 01:06:45.141878 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 2 offset 24 count 262120
I1217 01:06:45.141918 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:45.142156 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 2 offset 24 count 262120
I1217 01:06:45.142225 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:45.142495 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:45.142542 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.142897 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 1
I1217 01:06:45.142942 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.544629 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 0
I1217 01:06:45.544709 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3f62 29d85b50 'd') m d775 at 0 mt 1765933603 l 4096 t 0 d 0 ext )
I1217 01:06:45.545122 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 1 
I1217 01:06:45.545177 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 
I1217 01:06:45.545323 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Topen tag 0 fid 1 mode 0
I1217 01:06:45.545393 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Ropen tag 0 qid (15c3f62 29d85b50 'd') iounit 0
I1217 01:06:45.545518 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 0
I1217 01:06:45.545584 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3f62 29d85b50 'd') m d775 at 0 mt 1765933603 l 4096 t 0 d 0 ext )
I1217 01:06:45.545791 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 0 count 262120
I1217 01:06:45.545901 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 258
I1217 01:06:45.546047 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 261862
I1217 01:06:45.546080 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:45.546220 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:06:45.546246 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:45.546375 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 01:06:45.546407 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f63 29d85b50 '') 
I1217 01:06:45.546525 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.546561 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3f63 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.546680 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.546721 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3f63 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.546852 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:45.546875 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.547005 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'test-1765933603668091719' 
I1217 01:06:45.547038 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f65 29d85b50 '') 
I1217 01:06:45.547169 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.547202 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.547312 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.547346 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('test-1765933603668091719' 'jenkins' 'jenkins' '' q (15c3f65 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.547475 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:45.547497 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.547624 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 01:06:45.547657 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rwalk tag 0 (15c3f64 29d85b50 '') 
I1217 01:06:45.547773 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.547817 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3f64 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.547936 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tstat tag 0 fid 2
I1217 01:06:45.547971 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3f64 29d85b50 '') m 644 at 0 mt 1765933603 l 24 t 0 d 0 ext )
I1217 01:06:45.548151 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 2
I1217 01:06:45.548194 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.548329 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:06:45.548363 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rread tag 0 count 0
I1217 01:06:45.548532 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 1
I1217 01:06:45.548596 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.549989 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1217 01:06:45.550060 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rerror tag 0 ename 'file not found' ecode 0
I1217 01:06:45.802850 1276166 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36690 Tclunk tag 0 fid 0
I1217 01:06:45.802902 1276166 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36690 Rclunk tag 0
I1217 01:06:45.803978 1276166 main.go:127] stdlog: ufs.go:147 disconnected
I1217 01:06:45.826508 1276166 out.go:179] * Unmounting /mount-9p ...
I1217 01:06:45.829617 1276166 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 01:06:45.837158 1276166 mount.go:180] unmount for /mount-9p ran successfully
I1217 01:06:45.837278 1276166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/.mount-process: {Name:mk379dba9136cfca842320dc37e239bb904b662e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:06:45.840319 1276166 out.go:203] 
W1217 01:06:45.843367 1276166 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1217 01:06:45.846276 1276166 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.26s)
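The mount itself succeeded (the 9p filesystem is visible at /mount-9p with the expected options); the failure comes later, when the busybox pod cannot be replaced because the apiserver is down, so /mount-9p/pod-dates is never written. A sketch of the findmnt polling performed at functional_test_mount_test.go:115, with the same short retry seen in the log, assuming minikube is on PATH (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls the guest until the 9p mount appears, mirroring
	// the "will retry after 258.172488ms" step recorded above.
	func waitForMount(profile, path string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("minikube", "-p", profile, "ssh",
				fmt.Sprintf("findmnt -T %s | grep 9p", path))
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(258 * time.Millisecond)
		}
		return fmt.Errorf("%s never appeared as a 9p mount", path)
	}

	func main() {
		if err := waitForMount("functional-608344", "/mount-9p", 5); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("9p mount is ready")
	}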

TestKubernetesUpgrade (796.82s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.105684273s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-916713
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-916713: (1.33257762s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-916713 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-916713 status --format={{.Host}}: exit status 7 (70.757583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 01:36:56.876753 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m32.693184482s)

-- stdout --
	* [kubernetes-upgrade-916713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-916713" primary control-plane node in "kubernetes-upgrade-916713" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1217 01:36:45.058835 1408373 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:36:45.059032 1408373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:36:45.059046 1408373 out.go:374] Setting ErrFile to fd 2...
	I1217 01:36:45.059051 1408373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:36:45.059341 1408373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:36:45.059810 1408373 out.go:368] Setting JSON to false
	I1217 01:36:45.060852 1408373 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26355,"bootTime":1765909050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:36:45.060944 1408373 start.go:143] virtualization:  
	I1217 01:36:45.064304 1408373 out.go:179] * [kubernetes-upgrade-916713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:36:45.068471 1408373 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:36:45.068746 1408373 notify.go:221] Checking for updates...
	I1217 01:36:45.075132 1408373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:36:45.078411 1408373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:36:45.081711 1408373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:36:45.085029 1408373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:36:45.088189 1408373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:36:45.091888 1408373 config.go:182] Loaded profile config "kubernetes-upgrade-916713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1217 01:36:45.092734 1408373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:36:45.142393 1408373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:36:45.142542 1408373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:36:45.245108 1408373 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:36:45.232584351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:36:45.245256 1408373 docker.go:319] overlay module found
	I1217 01:36:45.250486 1408373 out.go:179] * Using the docker driver based on existing profile
	I1217 01:36:45.253449 1408373 start.go:309] selected driver: docker
	I1217 01:36:45.253503 1408373 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-916713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-916713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:36:45.253620 1408373 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:36:45.254410 1408373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:36:45.327011 1408373 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:36:45.317075775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:36:45.327367 1408373 cni.go:84] Creating CNI manager for ""
	I1217 01:36:45.327432 1408373 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:36:45.327477 1408373 start.go:353] cluster config:
	{Name:kubernetes-upgrade-916713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-916713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:36:45.330589 1408373 out.go:179] * Starting "kubernetes-upgrade-916713" primary control-plane node in "kubernetes-upgrade-916713" cluster
	I1217 01:36:45.333445 1408373 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:36:45.338631 1408373 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:36:45.341596 1408373 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:36:45.341689 1408373 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:36:45.341700 1408373 cache.go:65] Caching tarball of preloaded images
	I1217 01:36:45.341707 1408373 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:36:45.341795 1408373 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:36:45.341806 1408373 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:36:45.341921 1408373 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/config.json ...
	I1217 01:36:45.363064 1408373 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:36:45.363093 1408373 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:36:45.363117 1408373 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:36:45.363152 1408373 start.go:360] acquireMachinesLock for kubernetes-upgrade-916713: {Name:mkb7c7c58bd554cbfe7b05057b4ff9a4fff5ea69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:36:45.363232 1408373 start.go:364] duration metric: took 56.575µs to acquireMachinesLock for "kubernetes-upgrade-916713"
	I1217 01:36:45.363256 1408373 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:36:45.363265 1408373 fix.go:54] fixHost starting: 
	I1217 01:36:45.363532 1408373 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-916713 --format={{.State.Status}}
	I1217 01:36:45.383314 1408373 fix.go:112] recreateIfNeeded on kubernetes-upgrade-916713: state=Stopped err=<nil>
	W1217 01:36:45.383344 1408373 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:36:45.386527 1408373 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-916713" ...
	I1217 01:36:45.386634 1408373 cli_runner.go:164] Run: docker start kubernetes-upgrade-916713
	I1217 01:36:45.662308 1408373 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-916713 --format={{.State.Status}}
	I1217 01:36:45.682457 1408373 kic.go:430] container "kubernetes-upgrade-916713" state is running.
	I1217 01:36:45.682866 1408373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-916713
	I1217 01:36:45.708303 1408373 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/config.json ...
	I1217 01:36:45.708577 1408373 machine.go:94] provisionDockerMachine start ...
	I1217 01:36:45.708655 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:45.731589 1408373 main.go:143] libmachine: Using SSH client type: native
	I1217 01:36:45.731948 1408373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34168 <nil> <nil>}
	I1217 01:36:45.731966 1408373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:36:45.732707 1408373 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:36:48.865183 1408373 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-916713
	
	I1217 01:36:48.865208 1408373 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-916713"
	I1217 01:36:48.865272 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:48.887396 1408373 main.go:143] libmachine: Using SSH client type: native
	I1217 01:36:48.887714 1408373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34168 <nil> <nil>}
	I1217 01:36:48.887730 1408373 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-916713 && echo "kubernetes-upgrade-916713" | sudo tee /etc/hostname
	I1217 01:36:49.028007 1408373 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-916713
	
	I1217 01:36:49.028117 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.045984 1408373 main.go:143] libmachine: Using SSH client type: native
	I1217 01:36:49.046314 1408373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34168 <nil> <nil>}
	I1217 01:36:49.046336 1408373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-916713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-916713/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-916713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:36:49.182015 1408373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:36:49.182060 1408373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:36:49.182086 1408373 ubuntu.go:190] setting up certificates
	I1217 01:36:49.182103 1408373 provision.go:84] configureAuth start
	I1217 01:36:49.182167 1408373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-916713
	I1217 01:36:49.200631 1408373 provision.go:143] copyHostCerts
	I1217 01:36:49.200715 1408373 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:36:49.200729 1408373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:36:49.200807 1408373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:36:49.200921 1408373 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:36:49.200932 1408373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:36:49.200961 1408373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:36:49.201027 1408373 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:36:49.201036 1408373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:36:49.201060 1408373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:36:49.201122 1408373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-916713 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-916713 localhost minikube]
	I1217 01:36:49.405595 1408373 provision.go:177] copyRemoteCerts
	I1217 01:36:49.405699 1408373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:36:49.405739 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.422989 1408373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/kubernetes-upgrade-916713/id_rsa Username:docker}
	I1217 01:36:49.517343 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:36:49.536088 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 01:36:49.553477 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:36:49.571888 1408373 provision.go:87] duration metric: took 389.766216ms to configureAuth
	I1217 01:36:49.571915 1408373 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:36:49.572105 1408373 config.go:182] Loaded profile config "kubernetes-upgrade-916713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:36:49.572121 1408373 machine.go:97] duration metric: took 3.863526577s to provisionDockerMachine
	I1217 01:36:49.572129 1408373 start.go:293] postStartSetup for "kubernetes-upgrade-916713" (driver="docker")
	I1217 01:36:49.572141 1408373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:36:49.572194 1408373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:36:49.572243 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.589482 1408373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/kubernetes-upgrade-916713/id_rsa Username:docker}
	I1217 01:36:49.689476 1408373 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:36:49.692620 1408373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:36:49.692652 1408373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:36:49.692663 1408373 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:36:49.692743 1408373 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:36:49.692861 1408373 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:36:49.692974 1408373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:36:49.700288 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:36:49.717132 1408373 start.go:296] duration metric: took 144.98709ms for postStartSetup
	I1217 01:36:49.717213 1408373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:36:49.717275 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.733835 1408373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/kubernetes-upgrade-916713/id_rsa Username:docker}
	I1217 01:36:49.827804 1408373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:36:49.832813 1408373 fix.go:56] duration metric: took 4.469539814s for fixHost
	I1217 01:36:49.832841 1408373 start.go:83] releasing machines lock for "kubernetes-upgrade-916713", held for 4.469597218s
	I1217 01:36:49.832916 1408373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-916713
	I1217 01:36:49.852358 1408373 ssh_runner.go:195] Run: cat /version.json
	I1217 01:36:49.852437 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.852739 1408373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:36:49.852799 1408373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-916713
	I1217 01:36:49.877448 1408373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/kubernetes-upgrade-916713/id_rsa Username:docker}
	I1217 01:36:49.885388 1408373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/kubernetes-upgrade-916713/id_rsa Username:docker}
	I1217 01:36:50.090001 1408373 ssh_runner.go:195] Run: systemctl --version
	I1217 01:36:50.097082 1408373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:36:50.101918 1408373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:36:50.102008 1408373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:36:50.112848 1408373 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:36:50.112875 1408373 start.go:496] detecting cgroup driver to use...
	I1217 01:36:50.112908 1408373 detect.go:187] detected "cgroupfs" cgroup driver on host os
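	For reference, the "cgroupfs" detection logged above can be approximated by hand; a minimal sketch (the real heuristic lives in minikube's detect.go and may differ):
	# cgroup v2 exposes a single cgroup2fs mount at /sys/fs/cgroup; v1 shows tmpfs there
	if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ]; then
	  echo "cgroup v2 (unified hierarchy)"
	else
	  echo "cgroup v1 (legacy hierarchy)"
	fi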
	I1217 01:36:50.112960 1408373 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:36:50.133042 1408373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:36:50.148965 1408373 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:36:50.149103 1408373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:36:50.165291 1408373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:36:50.179495 1408373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:36:50.293356 1408373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:36:50.429121 1408373 docker.go:234] disabling docker service ...
	I1217 01:36:50.429207 1408373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:36:50.444690 1408373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:36:50.458272 1408373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:36:50.567990 1408373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:36:50.680414 1408373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:36:50.693332 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:36:50.707527 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:36:50.717137 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:36:50.726436 1408373 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:36:50.726504 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:36:50.735527 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:36:50.745116 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:36:50.754228 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:36:50.763224 1408373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:36:50.771205 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:36:50.779855 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:36:50.789202 1408373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:36:50.798616 1408373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:36:50.806199 1408373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:36:50.813596 1408373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:36:50.940058 1408373 ssh_runner.go:195] Run: sudo systemctl restart containerd
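	Taken together, the sed edits above pin a handful of CRI settings in /etc/containerd/config.toml before this restart. A quick way to confirm they landed (key names taken from the commands above; the exact TOML table paths depend on the containerd config version):
	sudo grep -nE 'sandbox_image|SystemdCgroup|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml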
	I1217 01:36:51.078450 1408373 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:36:51.078524 1408373 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:36:51.082463 1408373 start.go:564] Will wait 60s for crictl version
	I1217 01:36:51.082533 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:51.086410 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:36:51.117334 1408373 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
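	The version probe above works because /etc/crictl.yaml (written a few lines earlier) points crictl at the containerd socket; the equivalent one-off call without the config file would be:
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version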
	I1217 01:36:51.117417 1408373 ssh_runner.go:195] Run: containerd --version
	I1217 01:36:51.141294 1408373 ssh_runner.go:195] Run: containerd --version
	I1217 01:36:51.166081 1408373 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:36:51.169013 1408373 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-916713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:36:51.185782 1408373 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 01:36:51.189461 1408373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
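	The rewrite above uses a drop-then-append pattern to keep /etc/hosts idempotent across restarts: any stale host.minikube.internal line is filtered out, the fresh mapping is appended, and the file is copied back via sudo because the shell redirection itself runs unprivileged. Generalized sketch (NAME and IP are illustrative placeholders):
	NAME=host.minikube.internal; IP=192.168.76.1
	{ grep -v "$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts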
	I1217 01:36:51.199603 1408373 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-916713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-916713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:36:51.199713 1408373 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:36:51.199784 1408373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:36:51.224566 1408373 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:36:51.224645 1408373 ssh_runner.go:195] Run: which lz4
	I1217 01:36:51.228390 1408373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 01:36:51.231989 1408373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 01:36:51.232027 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1217 01:36:52.770943 1408373 containerd.go:563] duration metric: took 1.542619402s to copy over tarball
	I1217 01:36:52.771030 1408373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 01:36:54.827814 1408373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.056758936s)
	I1217 01:36:54.827898 1408373 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
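	The colliding archive members can be inspected without extracting, using the same external lz4 decompressor as the failing command, e.g.:
	sudo tar -I lz4 -tvf /preloaded.tar.lz4 | grep 'zoneinfo/posix' | head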
	I1217 01:36:54.827978 1408373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:36:54.855068 1408373 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:36:54.855091 1408373 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 01:36:54.855182 1408373 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:36:54.855234 1408373 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:54.855395 1408373 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 01:36:54.855409 1408373 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:54.855489 1408373 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:54.855503 1408373 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:54.855572 1408373 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:54.855588 1408373 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:54.857512 1408373 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 01:36:54.857699 1408373 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:54.857956 1408373 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:54.858019 1408373 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:54.858114 1408373 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:54.858253 1408373 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:36:54.857517 1408373 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:54.858255 1408373 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.204137 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1217 01:36:55.204211 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:55.226657 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1217 01:36:55.226768 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:55.228387 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1217 01:36:55.228490 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1217 01:36:55.228736 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1217 01:36:55.228809 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.242218 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1217 01:36:55.242340 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:55.276875 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1217 01:36:55.277026 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:55.287411 1408373 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1217 01:36:55.287542 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:55.348424 1408373 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1217 01:36:55.348519 1408373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:55.348616 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.355388 1408373 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1217 01:36:55.355483 1408373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:55.355585 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.355518 1408373 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1217 01:36:55.355808 1408373 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.355876 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.355659 1408373 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1217 01:36:55.355974 1408373 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 01:36:55.356037 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.356340 1408373 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1217 01:36:55.356397 1408373 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:55.356503 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.358045 1408373 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1217 01:36:55.358124 1408373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:55.358196 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.366359 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:55.366438 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:55.366493 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.366550 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:36:55.369636 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:55.369738 1408373 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1217 01:36:55.369773 1408373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:55.369808 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:55.371547 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:55.474332 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:36:55.474446 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:55.474521 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.474610 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:55.475925 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:55.543681 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:36:55.543848 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:55.561101 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:36:55.563783 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:36:55.563994 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:36:55.564101 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:55.564227 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:55.644700 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:36:55.644789 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1217 01:36:55.644862 1408373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:36:55.644927 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1217 01:36:55.644980 1408373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 01:36:55.672927 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:36:55.673084 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1217 01:36:55.673178 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1217 01:36:55.673265 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:36:55.704594 1408373 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1217 01:36:55.704775 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1217 01:36:55.704709 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1217 01:36:55.704917 1408373 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 01:36:55.704995 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1217 01:36:55.729533 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 01:36:55.729631 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1217 01:36:55.756569 1408373 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 01:36:55.756654 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1217 01:36:55.979607 1408373 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:36:55.979666 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1217 01:36:56.150587 1408373 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1217 01:36:56.150800 1408373 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1217 01:36:56.150868 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:36:56.637408 1408373 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1217 01:36:56.637447 1408373 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:36:56.637494 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:56.641188 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:36:56.777741 1408373 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 01:36:56.777847 1408373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:36:56.781459 1408373 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 01:36:56.781494 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1217 01:36:56.855331 1408373 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:36:56.855471 1408373 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:36:57.292211 1408373 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
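	After a ctr import like the ones above, the image must sit in the k8s.io namespace for the CRI plugin (and therefore crictl) to see it; either client can confirm the transfer:
	sudo ctr -n k8s.io images ls | grep storage-provisioner
	sudo crictl images | grep storage-provisioner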
	I1217 01:36:57.292296 1408373 cache_images.go:94] duration metric: took 2.437190598s to LoadCachedImages
	W1217 01:36:57.292460 1408373 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
	I1217 01:36:57.292487 1408373 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:36:57.292741 1408373 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-916713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-916713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
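	Once the 10-kubeadm.conf drop-in above is written and systemd reloaded, the effective kubelet unit can be checked with systemd's own tooling, for example:
	systemctl cat kubelet                           # base unit plus every *.conf drop-in, in order
	systemctl show kubelet -p ExecStart --no-pager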
	I1217 01:36:57.292821 1408373 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:36:57.321596 1408373 cni.go:84] Creating CNI manager for ""
	I1217 01:36:57.321628 1408373 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:36:57.321697 1408373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:36:57.321727 1408373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-916713 NodeName:kubernetes-upgrade-916713 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:36:57.321852 1408373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-916713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
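	A generated file like the kubeadm config above can be sanity-checked offline before use; a sketch, assuming the validate subcommand is available in the v1.35.0-beta.0 binaries staged under /var/lib/minikube/binaries:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new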
	I1217 01:36:57.321940 1408373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:36:57.331491 1408373 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:36:57.331568 1408373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:36:57.339435 1408373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1217 01:36:57.352267 1408373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:36:57.365092 1408373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1217 01:36:57.378467 1408373 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:36:57.382401 1408373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:36:57.392642 1408373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:36:57.532450 1408373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:36:57.549974 1408373 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713 for IP: 192.168.76.2
	I1217 01:36:57.550043 1408373 certs.go:195] generating shared ca certs ...
	I1217 01:36:57.550075 1408373 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:36:57.550282 1408373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:36:57.550371 1408373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:36:57.550411 1408373 certs.go:257] generating profile certs ...
	I1217 01:36:57.550547 1408373 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.key
	I1217 01:36:57.550674 1408373 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/apiserver.key.d12396c1
	I1217 01:36:57.550830 1408373 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/proxy-client.key
	I1217 01:36:57.551006 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:36:57.551077 1408373 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:36:57.551124 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:36:57.551178 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:36:57.551254 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:36:57.551328 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:36:57.551419 1408373 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:36:57.552294 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:36:57.573867 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:36:57.594938 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:36:57.615168 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:36:57.634410 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 01:36:57.651989 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:36:57.669859 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:36:57.687794 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:36:57.705465 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:36:57.722847 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:36:57.740263 1408373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:36:57.759347 1408373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:36:57.772261 1408373 ssh_runner.go:195] Run: openssl version
	I1217 01:36:57.780597 1408373 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:36:57.788828 1408373 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:36:57.797296 1408373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:36:57.801246 1408373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:36:57.801339 1408373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:36:57.848546 1408373 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:36:57.855976 1408373 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:36:57.863302 1408373 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:36:57.870728 1408373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:36:57.874449 1408373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:36:57.874515 1408373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:36:57.915690 1408373 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:36:57.923169 1408373 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:36:57.930628 1408373 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:36:57.938278 1408373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:36:57.942103 1408373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:36:57.942208 1408373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:36:57.982895 1408373 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
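	The openssl -hash / test -L pairs above follow OpenSSL's subject-hash convention: each trusted certificate is exposed in /etc/ssl/certs as <subject_hash>.0 so the library can locate it without scanning the directory. The same linkage by hand:
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"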
	I1217 01:36:57.990656 1408373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:36:57.994494 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:36:58.036487 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:36:58.079914 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:36:58.124199 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:36:58.166175 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:36:58.207391 1408373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
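	The -checkend 86400 probes above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is what lets the restart path decide whether regeneration is needed, e.g.:
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo "valid >24h" || echo "expiring soon"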
	I1217 01:36:58.250947 1408373 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-916713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-916713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:36:58.251039 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:36:58.251120 1408373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:36:58.285988 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:36:58.286056 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:36:58.286076 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:36:58.286102 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:36:58.286121 1408373 cri.go:89] found id: ""
	I1217 01:36:58.286187 1408373 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1217 01:36:58.303629 1408373 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:36:58Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1217 01:36:58.303738 1408373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:36:58.311502 1408373 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:36:58.311569 1408373 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:36:58.311636 1408373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:36:58.318577 1408373 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:36:58.319148 1408373 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-916713" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:36:58.319442 1408373 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-916713" cluster setting kubeconfig missing "kubernetes-upgrade-916713" context setting]
	I1217 01:36:58.319923 1408373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:36:58.320569 1408373 kapi.go:59] client config for kubernetes-upgrade-916713: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
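	The kubeconfig repair above (missing cluster and context entries written back under a file lock) can be approximated with client-go's clientcmd package. A sketch under those assumptions, with error handling trimmed and the write lock the log mentions omitted:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureCluster adds cluster and context entries to a kubeconfig when
// absent, roughly what the "needs updating (will repair)" path does.
func ensureCluster(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; ok {
		return nil // already present, nothing to repair
	}
	cluster := api.NewCluster()
	cluster.Server = server
	cluster.CertificateAuthority = caFile
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := ensureCluster("/tmp/kubeconfig", "kubernetes-upgrade-916713",
		"https://192.168.76.2:8443", "/path/to/ca.crt"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubeconfig ok")
}
```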
	I1217 01:36:58.321212 1408373 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:36:58.321256 1408373 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:36:58.321284 1408373 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:36:58.321304 1408373 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:36:58.321327 1408373 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
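	The envvar.go lines record client-go feature-gate defaults, which can be overridden per feature through a KUBE_FEATURE_<Name> environment variable. A small illustrative sketch of that default-plus-env-override pattern (the helper below is not client-go's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// featureEnabled mimics the pattern behind the "Feature gate default state"
// lines: a compiled-in default that an environment variable may override.
func featureEnabled(name string, def bool) bool {
	v, ok := os.LookupEnv("KUBE_FEATURE_" + name)
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def // malformed override: keep the default
	}
	return b
}

func main() {
	for name, def := range map[string]bool{
		"InOrderInformers": true,
		"WatchListClient":  false,
	} {
		fmt.Printf("feature=%q enabled=%v\n", name, featureEnabled(name, def))
	}
}
```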
	I1217 01:36:58.321629 1408373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:36:58.334452 1408373 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 01:36:18.980621995 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 01:36:57.373055636 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-916713"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
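	The drift diff above boils down to the kubeadm config API change from v1beta3 to v1beta4: extraArgs moved from a string map to an ordered list of name/value pairs, the etcd proxy-refresh-interval override was dropped, and kubernetesVersion was bumped for the upgrade. A minimal sketch of converting the old map form to the new list form (the Arg type below mirrors the shape in the diff, not kubeadm's actual Go type):

```go
package main

import (
	"fmt"
	"sort"
)

// Arg mirrors the name/value argument entries in the v1beta4 YAML above.
type Arg struct {
	Name  string
	Value string
}

// toV1beta4Args converts a v1beta3-style extraArgs map into the ordered
// list form required by v1beta4, sorting keys for deterministic output.
func toV1beta4Args(m map[string]string) []Arg {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	args := make([]Arg, 0, len(m))
	for _, k := range keys {
		args = append(args, Arg{Name: k, Value: m[k]})
	}
	return args
}

func main() {
	old := map[string]string{
		"allocate-node-cidrs": "true",
		"leader-elect":        "false",
	}
	for _, a := range toV1beta4Args(old) {
		fmt.Printf("- name: %q\n  value: %q\n", a.Name, a.Value)
	}
}
```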
	I1217 01:36:58.334509 1408373 kubeadm.go:1161] stopping kube-system containers ...
	I1217 01:36:58.334535 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 01:36:58.334604 1408373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:36:58.369326 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:36:58.369387 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:36:58.369407 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:36:58.369425 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:36:58.369443 1408373 cri.go:89] found id: ""
	I1217 01:36:58.369463 1408373 cri.go:252] Stopping containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:36:58.369533 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:36:58.373654 1408373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6
	I1217 01:36:58.424474 1408373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
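	With drift detected, the kube-system containers are stopped with a 10-second grace period and then the kubelet is stopped, so nothing restarts the old control plane mid-reconfigure. A sketch of those two steps as they appear in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// stopForReconfigure mirrors the two log steps above: crictl stop with a
// grace period for each control-plane container, then stopping the kubelet.
func stopForReconfigure(ids []string) error {
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl stop: %v: %s", err, out)
	}
	if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
		return fmt.Errorf("stop kubelet: %v: %s", err, out)
	}
	return nil
}

func main() {
	ids := []string{"f6703bb23f3c", "5b90a5ba4927"} // truncated IDs for brevity
	if err := stopForReconfigure(ids); err != nil {
		fmt.Println(err)
	}
}
```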
	I1217 01:36:58.449004 1408373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:36:58.457766 1408373 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 17 01:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 17 01:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 17 01:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 17 01:36 /etc/kubernetes/scheduler.conf
	
	I1217 01:36:58.457893 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:36:58.466906 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:36:58.475496 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:36:58.484946 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:36:58.485010 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:36:58.493044 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:36:58.501375 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:36:58.501438 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
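	Each kubeconfig under /etc/kubernetes is then checked for the expected control-plane endpoint; files where grep exits non-zero (endpoint absent, so the file is stale) are removed so kubeadm will regenerate them in the next phase. A sketch of that grep-or-remove loop:

```go
package main

import (
	"fmt"
	"os/exec"
)

// pruneStale removes any config that does not reference the expected
// endpoint, mirroring the grep/rm pairs in the log. grep exiting non-zero
// means "not found", which is the signal to delete.
func pruneStale(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err == nil {
			continue // endpoint present, keep the file
		}
		if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
			fmt.Printf("rm %s: %v\n", f, err)
		}
	}
}

func main() {
	pruneStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```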
	I1217 01:36:58.508876 1408373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:36:58.516734 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:36:58.568376 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:36:59.770609 1408373 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202159269s)
	I1217 01:36:59.770730 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:36:59.969689 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:37:00.044119 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
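	Rather than a full kubeadm init, the restart path re-runs individual init phases in dependency order (certs, then kubeconfig, kubelet-start, control-plane, and etcd) against the new kubeadm.yaml, using the version-matched binaries under /var/lib/minikube/binaries. A sketch of that sequence; the phase names and --config flag match the commands in the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const binDir = "/var/lib/minikube/binaries/v1.35.0-beta.0"
	const cfg = "/var/tmp/minikube/kubeadm.yaml"

	// Phase order matters: later phases consume artifacts from earlier ones
	// (kubeconfigs need certs, the control plane needs kubeconfigs, and so on).
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control plane phases reapplied")
}
```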
	I1217 01:37:00.246601 1408373 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:37:00.246775 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:00.746946 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:01.246945 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:01.747478 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:02.247610 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:02.747706 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:03.247594 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:03.747384 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:04.246875 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:04.746948 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:05.247092 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:05.746931 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:06.247229 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:06.747780 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:07.246933 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:07.747372 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:08.246841 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:08.747698 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:09.246870 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:09.747340 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:10.246896 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:10.746888 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:11.247512 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:11.747653 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:12.246866 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:12.746876 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:13.247752 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:13.747546 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:14.246868 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:14.747472 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:15.247816 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:15.747288 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:16.247614 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:16.747779 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:17.247704 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:17.747413 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:18.247680 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:18.747770 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:19.247425 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:19.747612 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:20.247297 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:20.747497 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:21.247499 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:21.746882 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:22.247322 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:22.747657 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:23.247489 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:23.746893 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:24.246865 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:24.747446 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:25.247547 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:25.746882 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:26.247647 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:26.747577 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:27.247267 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:27.746902 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:28.246910 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:28.747336 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:29.246871 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:29.747741 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:30.247696 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:30.747649 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:31.247613 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:31.746891 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:32.247620 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:32.746877 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:33.247389 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:33.747812 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:34.247535 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:34.747746 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:35.246872 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:35.747861 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:36.247770 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:36.747772 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:37.246865 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:37.746869 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:38.247813 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:38.747396 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:39.247668 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:39.747770 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:40.247363 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:40.747292 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:41.246911 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:41.746886 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:42.247228 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:42.747325 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:43.246886 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:43.746901 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:44.247190 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:44.746892 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:45.247159 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:45.746860 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:46.247582 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:46.746921 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:47.247584 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:47.747696 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:48.247341 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:48.746881 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:49.247642 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:49.746888 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:50.247183 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:50.746876 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:51.247480 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:51.746859 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:52.246868 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:52.747683 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:53.246970 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:53.747659 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:54.246869 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:54.746883 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:55.247368 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:55.746868 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:56.247621 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:56.746945 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:57.246855 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:57.747408 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:58.247830 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:58.746873 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:59.246864 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:59.747619 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
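	The repeated pgrep lines above are a fixed-interval wait: every 500ms the runner asks whether a kube-apiserver process matching the minikube pattern exists, and when the deadline passes without a hit (here roughly a minute elapses before 01:38:00) it falls through to diagnostic log collection. A sketch of that polling loop:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// appears or the timeout elapses, mirroring the 500ms cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Println(err)
	}
}
```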
	I1217 01:38:00.246842 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:00.246958 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:00.302502 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:00.302525 1408373 cri.go:89] found id: ""
	I1217 01:38:00.302534 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:00.302598 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:00.310473 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:00.310571 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:00.365583 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:00.365605 1408373 cri.go:89] found id: ""
	I1217 01:38:00.365614 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:00.365713 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:00.390411 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:00.390578 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:00.472538 1408373 cri.go:89] found id: ""
	I1217 01:38:00.472569 1408373 logs.go:282] 0 containers: []
	W1217 01:38:00.472579 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:00.472588 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:00.472658 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:00.555157 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:00.555179 1408373 cri.go:89] found id: ""
	I1217 01:38:00.555187 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:00.555251 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:00.569961 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:00.570040 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:00.639940 1408373 cri.go:89] found id: ""
	I1217 01:38:00.639961 1408373 logs.go:282] 0 containers: []
	W1217 01:38:00.639970 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:00.639976 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:00.640036 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:00.688561 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:00.688642 1408373 cri.go:89] found id: ""
	I1217 01:38:00.688680 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:00.688773 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:00.692591 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:00.692659 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:00.745818 1408373 cri.go:89] found id: ""
	I1217 01:38:00.745899 1408373 logs.go:282] 0 containers: []
	W1217 01:38:00.745921 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:00.745939 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:00.746025 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:00.804327 1408373 cri.go:89] found id: ""
	I1217 01:38:00.804404 1408373 logs.go:282] 0 containers: []
	W1217 01:38:00.804438 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:00.804476 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:00.804509 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:00.865890 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:00.865971 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:00.982021 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
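	The "connection refused" here is a different failure than the pgrep misses above: kubectl resolved localhost:8443 but nothing accepted the connection, meaning an apiserver container ID exists while no process is actually serving. A quick TCP probe separates "no listener" from slower failure modes such as TLS or auth errors:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port directly; an immediate refusal means no
	// listener, matching the kubectl error in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections")
}
```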
	I1217 01:38:00.982053 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:00.982067 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:01.057354 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:01.057431 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:01.126474 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:01.126552 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:01.224598 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:01.224685 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:01.247435 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:01.247510 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:01.312902 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:01.312936 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:01.362052 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:01.362087 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
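	Each diagnostic cycle like the one above gathers the same fixed set of sources: journalctl for the kubelet and containerd units, crictl logs --tail 400 for every control-plane container that was found, dmesg, and an overall container-status listing; components with no container (coredns, kube-proxy, kindnet, storage-provisioner here) are logged as warnings instead. A sketch of that fan-out:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic fan-out like the cycle above; containers maps
// a component name to the container ID found for it (empty = not found).
func gather(containers map[string]string) {
	run := func(label string, args ...string) {
		out, _ := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("== %s ==\n%s\n", label, out)
	}
	run("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	run("containerd", "journalctl", "-u", "containerd", "-n", "400")
	for name, id := range containers {
		if id == "" {
			continue // component had no container; the log warns instead
		}
		run(name, "crictl", "logs", "--tail", "400", id)
	}
	run("container status", "crictl", "ps", "-a")
}

func main() {
	gather(map[string]string{
		"kube-apiserver": "5b90a5ba4927", // truncated for brevity
		"kube-proxy":     "",
	})
}
```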
	I1217 01:38:03.903762 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:03.914688 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:03.914762 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:03.942552 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:03.942573 1408373 cri.go:89] found id: ""
	I1217 01:38:03.942582 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:03.942639 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:03.946533 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:03.946621 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:03.971826 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:03.971849 1408373 cri.go:89] found id: ""
	I1217 01:38:03.971858 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:03.971916 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:03.975897 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:03.976002 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:04.000546 1408373 cri.go:89] found id: ""
	I1217 01:38:04.000572 1408373 logs.go:282] 0 containers: []
	W1217 01:38:04.000581 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:04.000588 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:04.000666 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:04.032156 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:04.032191 1408373 cri.go:89] found id: ""
	I1217 01:38:04.032200 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:04.032258 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:04.036249 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:04.036346 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:04.061557 1408373 cri.go:89] found id: ""
	I1217 01:38:04.061586 1408373 logs.go:282] 0 containers: []
	W1217 01:38:04.061596 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:04.061602 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:04.061720 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:04.088032 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:04.088055 1408373 cri.go:89] found id: ""
	I1217 01:38:04.088064 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:04.088137 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:04.091981 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:04.092054 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:04.120125 1408373 cri.go:89] found id: ""
	I1217 01:38:04.120152 1408373 logs.go:282] 0 containers: []
	W1217 01:38:04.120162 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:04.120169 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:04.120257 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:04.148793 1408373 cri.go:89] found id: ""
	I1217 01:38:04.148828 1408373 logs.go:282] 0 containers: []
	W1217 01:38:04.148838 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:04.148852 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:04.148868 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:04.178391 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:04.178428 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:04.210306 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:04.210336 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:04.225633 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:04.225674 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:04.294929 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:04.294952 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:04.294965 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:04.329300 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:04.329330 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:04.360768 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:04.360796 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:04.427323 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:04.427404 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:04.465620 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:04.466014 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:07.009014 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:07.020306 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:07.020375 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:07.049581 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:07.049602 1408373 cri.go:89] found id: ""
	I1217 01:38:07.049611 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:07.049699 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:07.053384 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:07.053457 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:07.082181 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:07.082205 1408373 cri.go:89] found id: ""
	I1217 01:38:07.082214 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:07.082281 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:07.085911 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:07.085979 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:07.113072 1408373 cri.go:89] found id: ""
	I1217 01:38:07.113095 1408373 logs.go:282] 0 containers: []
	W1217 01:38:07.113111 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:07.113118 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:07.113175 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:07.138290 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:07.138355 1408373 cri.go:89] found id: ""
	I1217 01:38:07.138377 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:07.138458 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:07.142320 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:07.142394 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:07.170992 1408373 cri.go:89] found id: ""
	I1217 01:38:07.171015 1408373 logs.go:282] 0 containers: []
	W1217 01:38:07.171023 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:07.171029 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:07.171087 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:07.200363 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:07.200382 1408373 cri.go:89] found id: ""
	I1217 01:38:07.200391 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:07.200445 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:07.204052 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:07.204122 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:07.228690 1408373 cri.go:89] found id: ""
	I1217 01:38:07.228713 1408373 logs.go:282] 0 containers: []
	W1217 01:38:07.228722 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:07.228728 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:07.228785 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:07.252236 1408373 cri.go:89] found id: ""
	I1217 01:38:07.252264 1408373 logs.go:282] 0 containers: []
	W1217 01:38:07.252273 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:07.252287 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:07.252300 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:07.309228 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:07.309264 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:07.323915 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:07.323942 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:07.354977 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:07.355008 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:07.385471 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:07.385507 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:07.460457 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:07.460478 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:07.460497 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:07.498686 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:07.498723 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:07.535895 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:07.535927 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:07.565088 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:07.565116 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:10.094342 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:10.105936 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:10.106012 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:10.131746 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:10.131768 1408373 cri.go:89] found id: ""
	I1217 01:38:10.131776 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:10.131833 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:10.135658 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:10.135759 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:10.162589 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:10.162618 1408373 cri.go:89] found id: ""
	I1217 01:38:10.162626 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:10.162683 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:10.166523 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:10.166597 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:10.192085 1408373 cri.go:89] found id: ""
	I1217 01:38:10.192174 1408373 logs.go:282] 0 containers: []
	W1217 01:38:10.192198 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:10.192228 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:10.192304 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:10.218524 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:10.218598 1408373 cri.go:89] found id: ""
	I1217 01:38:10.218635 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:10.218724 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:10.222411 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:10.222486 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:10.247329 1408373 cri.go:89] found id: ""
	I1217 01:38:10.247353 1408373 logs.go:282] 0 containers: []
	W1217 01:38:10.247362 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:10.247369 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:10.247429 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:10.273924 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:10.273943 1408373 cri.go:89] found id: ""
	I1217 01:38:10.273952 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:10.274013 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:10.277815 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:10.277887 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:10.302001 1408373 cri.go:89] found id: ""
	I1217 01:38:10.302078 1408373 logs.go:282] 0 containers: []
	W1217 01:38:10.302101 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:10.302120 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:10.302203 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:10.327433 1408373 cri.go:89] found id: ""
	I1217 01:38:10.327456 1408373 logs.go:282] 0 containers: []
	W1217 01:38:10.327465 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:10.327478 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:10.327490 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:10.385616 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:10.385662 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:10.401195 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:10.401226 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:10.474388 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:10.474407 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:10.474420 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:10.511396 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:10.511429 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:10.550345 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:10.550377 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:10.590437 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:10.590487 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:10.623193 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:10.623225 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:10.652215 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:10.652248 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:13.186172 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:13.196243 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:13.196326 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:13.222280 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:13.222303 1408373 cri.go:89] found id: ""
	I1217 01:38:13.222311 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:13.222368 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:13.226181 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:13.226255 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:13.249939 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:13.249963 1408373 cri.go:89] found id: ""
	I1217 01:38:13.249971 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:13.250024 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:13.253448 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:13.253516 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:13.278253 1408373 cri.go:89] found id: ""
	I1217 01:38:13.278274 1408373 logs.go:282] 0 containers: []
	W1217 01:38:13.278284 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:13.278290 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:13.278349 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:13.310757 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:13.310782 1408373 cri.go:89] found id: ""
	I1217 01:38:13.310803 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:13.310860 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:13.314569 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:13.314639 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:13.338527 1408373 cri.go:89] found id: ""
	I1217 01:38:13.338551 1408373 logs.go:282] 0 containers: []
	W1217 01:38:13.338561 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:13.338567 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:13.338623 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:13.363884 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:13.363954 1408373 cri.go:89] found id: ""
	I1217 01:38:13.363970 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:13.364042 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:13.368279 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:13.368346 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:13.399172 1408373 cri.go:89] found id: ""
	I1217 01:38:13.399195 1408373 logs.go:282] 0 containers: []
	W1217 01:38:13.399204 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:13.399210 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:13.399267 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:13.428562 1408373 cri.go:89] found id: ""
	I1217 01:38:13.428636 1408373 logs.go:282] 0 containers: []
	W1217 01:38:13.428660 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:13.428715 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:13.428746 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:13.489850 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:13.489890 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:13.507503 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:13.507531 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:13.578096 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:13.578117 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:13.578130 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:13.615518 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:13.615548 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:13.648797 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:13.648829 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:13.678563 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:13.678593 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:13.707197 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:13.707228 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:13.739331 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:13.739362 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
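	
	The block above is one full iteration of minikube's apiserver wait loop: probe for a kube-apiserver process with pgrep, enumerate each control-plane component's containers with crictl, then gather kubelet, dmesg, describe-nodes, component, containerd, and container-status logs. The "describe nodes" step fails on every pass because nothing is accepting connections on localhost:8443. Below is a minimal sketch of the same probe, runnable by hand inside the node; the container ID is copied from the log above, and the sequence is illustrative rather than minikube's actual code:
	
	    # Probe for a running apiserver process, as the wait loop does
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	    # Enumerate apiserver containers known to the CRI runtime
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # Tail the apiserver container's logs (ID taken from the log above)
	    sudo crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5
	
	    # The step that keeps failing: no listener answers on the apiserver port
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	
	The same cycle repeats below at roughly three-second intervals until the wait times out.
	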
	I1217 01:38:16.267746 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:16.277810 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:16.277881 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:16.303026 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:16.303055 1408373 cri.go:89] found id: ""
	I1217 01:38:16.303063 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:16.303135 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:16.306978 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:16.307073 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:16.336664 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:16.336687 1408373 cri.go:89] found id: ""
	I1217 01:38:16.336696 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:16.336754 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:16.340740 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:16.340820 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:16.365024 1408373 cri.go:89] found id: ""
	I1217 01:38:16.365048 1408373 logs.go:282] 0 containers: []
	W1217 01:38:16.365056 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:16.365063 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:16.365133 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:16.393208 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:16.393233 1408373 cri.go:89] found id: ""
	I1217 01:38:16.393242 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:16.393298 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:16.398034 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:16.398109 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:16.443936 1408373 cri.go:89] found id: ""
	I1217 01:38:16.443967 1408373 logs.go:282] 0 containers: []
	W1217 01:38:16.443987 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:16.443994 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:16.444052 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:16.473166 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:16.473190 1408373 cri.go:89] found id: ""
	I1217 01:38:16.473199 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:16.473292 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:16.477166 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:16.477235 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:16.506259 1408373 cri.go:89] found id: ""
	I1217 01:38:16.506339 1408373 logs.go:282] 0 containers: []
	W1217 01:38:16.506355 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:16.506363 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:16.506429 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:16.531773 1408373 cri.go:89] found id: ""
	I1217 01:38:16.531837 1408373 logs.go:282] 0 containers: []
	W1217 01:38:16.531852 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:16.531867 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:16.531887 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:16.593310 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:16.593348 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:16.658879 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:16.658899 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:16.658914 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:16.694226 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:16.694257 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:16.729838 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:16.729872 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:16.759162 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:16.759192 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:16.773872 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:16.773899 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:16.814394 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:16.814429 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:16.844134 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:16.844168 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
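	
	The "container status" fallback chain just above is worth unpacking: the backticks substitute the output of `which crictl || echo crictl`, so the command runs crictl by absolute path when it is on PATH and otherwise tries the bare name; if that invocation fails entirely, the `|| sudo docker ps -a` arm covers docker-runtime nodes. The same logic with $() substitution, equivalent to but easier to read than the backtick form in the log:
	
	    # Prefer crictl when it resolves; fall back to docker if crictl fails
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	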
	I1217 01:38:19.373758 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:19.384433 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:19.384502 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:19.417734 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:19.417756 1408373 cri.go:89] found id: ""
	I1217 01:38:19.417766 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:19.417823 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:19.422114 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:19.422186 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:19.452377 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:19.452399 1408373 cri.go:89] found id: ""
	I1217 01:38:19.452408 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:19.452465 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:19.456350 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:19.456424 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:19.481982 1408373 cri.go:89] found id: ""
	I1217 01:38:19.482004 1408373 logs.go:282] 0 containers: []
	W1217 01:38:19.482013 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:19.482020 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:19.482078 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:19.506465 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:19.506488 1408373 cri.go:89] found id: ""
	I1217 01:38:19.506496 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:19.506577 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:19.510614 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:19.510683 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:19.535338 1408373 cri.go:89] found id: ""
	I1217 01:38:19.535408 1408373 logs.go:282] 0 containers: []
	W1217 01:38:19.535423 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:19.535431 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:19.535497 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:19.560875 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:19.560898 1408373 cri.go:89] found id: ""
	I1217 01:38:19.560907 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:19.560974 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:19.564707 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:19.564789 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:19.589149 1408373 cri.go:89] found id: ""
	I1217 01:38:19.589173 1408373 logs.go:282] 0 containers: []
	W1217 01:38:19.589182 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:19.589189 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:19.589253 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:19.614718 1408373 cri.go:89] found id: ""
	I1217 01:38:19.614793 1408373 logs.go:282] 0 containers: []
	W1217 01:38:19.614809 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:19.614826 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:19.614838 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:19.672649 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:19.672681 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:19.736991 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:19.737024 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:19.737039 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:19.773851 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:19.773881 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:19.807379 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:19.807410 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:19.841250 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:19.841283 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:19.870322 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:19.870353 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:19.898415 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:19.898446 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:19.912836 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:19.912870 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:22.481835 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:22.494181 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:22.494297 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:22.530241 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:22.530275 1408373 cri.go:89] found id: ""
	I1217 01:38:22.530284 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:22.530415 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:22.534427 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:22.534517 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:22.561066 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:22.561087 1408373 cri.go:89] found id: ""
	I1217 01:38:22.561096 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:22.561151 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:22.564821 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:22.564895 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:22.590599 1408373 cri.go:89] found id: ""
	I1217 01:38:22.590623 1408373 logs.go:282] 0 containers: []
	W1217 01:38:22.590631 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:22.590638 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:22.590700 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:22.621052 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:22.621070 1408373 cri.go:89] found id: ""
	I1217 01:38:22.621079 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:22.621137 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:22.625033 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:22.625114 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:22.649907 1408373 cri.go:89] found id: ""
	I1217 01:38:22.649929 1408373 logs.go:282] 0 containers: []
	W1217 01:38:22.649937 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:22.649943 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:22.650000 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:22.675378 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:22.675398 1408373 cri.go:89] found id: ""
	I1217 01:38:22.675407 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:22.675484 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:22.679210 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:22.679284 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:22.703552 1408373 cri.go:89] found id: ""
	I1217 01:38:22.703574 1408373 logs.go:282] 0 containers: []
	W1217 01:38:22.703584 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:22.703591 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:22.703649 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:22.731164 1408373 cri.go:89] found id: ""
	I1217 01:38:22.731185 1408373 logs.go:282] 0 containers: []
	W1217 01:38:22.731194 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:22.731208 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:22.731220 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:22.772013 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:22.772042 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:22.802583 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:22.802621 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:22.832522 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:22.832601 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:22.847394 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:22.847422 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:22.911765 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:22.911786 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:22.911800 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:22.946725 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:22.946759 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:22.984168 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:22.984198 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:23.042949 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:23.042982 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:25.577262 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:25.587436 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:25.587507 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:25.613118 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:25.613142 1408373 cri.go:89] found id: ""
	I1217 01:38:25.613151 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:25.613207 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:25.617494 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:25.617567 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:25.642675 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:25.642701 1408373 cri.go:89] found id: ""
	I1217 01:38:25.642719 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:25.642790 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:25.646406 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:25.646481 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:25.671342 1408373 cri.go:89] found id: ""
	I1217 01:38:25.671371 1408373 logs.go:282] 0 containers: []
	W1217 01:38:25.671380 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:25.671388 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:25.671451 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:25.697488 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:25.697514 1408373 cri.go:89] found id: ""
	I1217 01:38:25.697523 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:25.697583 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:25.701293 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:25.701365 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:25.725627 1408373 cri.go:89] found id: ""
	I1217 01:38:25.725696 1408373 logs.go:282] 0 containers: []
	W1217 01:38:25.725706 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:25.725713 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:25.725776 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:25.755911 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:25.755931 1408373 cri.go:89] found id: ""
	I1217 01:38:25.755940 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:25.756000 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:25.759769 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:25.759848 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:25.784827 1408373 cri.go:89] found id: ""
	I1217 01:38:25.784852 1408373 logs.go:282] 0 containers: []
	W1217 01:38:25.784862 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:25.784868 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:25.784927 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:25.810629 1408373 cri.go:89] found id: ""
	I1217 01:38:25.810652 1408373 logs.go:282] 0 containers: []
	W1217 01:38:25.810661 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:25.810678 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:25.810689 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:25.846898 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:25.846931 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:25.886136 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:25.886170 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:25.915588 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:25.915616 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:25.949636 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:25.949780 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:25.966100 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:25.966128 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:25.996946 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:25.996973 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:26.063028 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:26.063061 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:26.147383 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:26.147454 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:26.147481 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:28.699425 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:28.710640 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:28.710718 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:28.739375 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:28.739395 1408373 cri.go:89] found id: ""
	I1217 01:38:28.739403 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:28.739457 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:28.743129 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:28.743202 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:28.770596 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:28.770617 1408373 cri.go:89] found id: ""
	I1217 01:38:28.770626 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:28.770682 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:28.774440 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:28.774513 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:28.800151 1408373 cri.go:89] found id: ""
	I1217 01:38:28.800174 1408373 logs.go:282] 0 containers: []
	W1217 01:38:28.800184 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:28.800195 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:28.800257 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:28.828984 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:28.829006 1408373 cri.go:89] found id: ""
	I1217 01:38:28.829014 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:28.829071 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:28.832665 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:28.832731 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:28.856830 1408373 cri.go:89] found id: ""
	I1217 01:38:28.856851 1408373 logs.go:282] 0 containers: []
	W1217 01:38:28.856869 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:28.856876 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:28.856936 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:28.882453 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:28.882479 1408373 cri.go:89] found id: ""
	I1217 01:38:28.882487 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:28.882545 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:28.886308 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:28.886384 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:28.911215 1408373 cri.go:89] found id: ""
	I1217 01:38:28.911240 1408373 logs.go:282] 0 containers: []
	W1217 01:38:28.911249 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:28.911255 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:28.911317 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:28.937296 1408373 cri.go:89] found id: ""
	I1217 01:38:28.937317 1408373 logs.go:282] 0 containers: []
	W1217 01:38:28.937328 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:28.937341 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:28.937352 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:28.996877 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:28.996912 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:29.035847 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:29.035882 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:29.069958 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:29.069985 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:29.084651 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:29.084678 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:29.165380 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:29.165402 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:29.165415 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:29.203218 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:29.203248 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:29.239324 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:29.239356 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:29.268770 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:29.268806 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:31.811948 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:31.822343 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:31.822425 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:31.847292 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:31.847313 1408373 cri.go:89] found id: ""
	I1217 01:38:31.847322 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:31.847377 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:31.851214 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:31.851286 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:31.879422 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:31.879446 1408373 cri.go:89] found id: ""
	I1217 01:38:31.879454 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:31.879514 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:31.883311 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:31.883390 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:31.908325 1408373 cri.go:89] found id: ""
	I1217 01:38:31.908348 1408373 logs.go:282] 0 containers: []
	W1217 01:38:31.908357 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:31.908363 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:31.908422 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:31.936355 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:31.936375 1408373 cri.go:89] found id: ""
	I1217 01:38:31.936384 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:31.936437 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:31.940267 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:31.940342 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:31.971068 1408373 cri.go:89] found id: ""
	I1217 01:38:31.971090 1408373 logs.go:282] 0 containers: []
	W1217 01:38:31.971098 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:31.971111 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:31.971173 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:31.996614 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:31.996693 1408373 cri.go:89] found id: ""
	I1217 01:38:31.996715 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:31.996799 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:32.000637 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:32.000750 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:32.027419 1408373 cri.go:89] found id: ""
	I1217 01:38:32.027445 1408373 logs.go:282] 0 containers: []
	W1217 01:38:32.027464 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:32.027471 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:32.027533 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:32.056844 1408373 cri.go:89] found id: ""
	I1217 01:38:32.056869 1408373 logs.go:282] 0 containers: []
	W1217 01:38:32.056878 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:32.056904 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:32.056920 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:32.085446 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:32.085481 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:32.148671 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:32.149824 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:32.188602 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:32.188676 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:32.223005 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:32.223035 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:32.265122 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:32.265148 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:32.279716 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:32.279743 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:32.347622 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:32.347686 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:32.347717 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:32.379292 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:32.379322 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:34.914602 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:34.924797 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:34.924867 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:34.950394 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:34.950417 1408373 cri.go:89] found id: ""
	I1217 01:38:34.950426 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:34.950481 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:34.954189 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:34.954263 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:34.986080 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:34.986154 1408373 cri.go:89] found id: ""
	I1217 01:38:34.986178 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:34.986262 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:34.990211 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:34.990304 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:35.033419 1408373 cri.go:89] found id: ""
	I1217 01:38:35.033448 1408373 logs.go:282] 0 containers: []
	W1217 01:38:35.033457 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:35.033464 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:35.033525 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:35.059344 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:35.059410 1408373 cri.go:89] found id: ""
	I1217 01:38:35.059430 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:35.059492 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:35.063491 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:35.063586 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:35.089307 1408373 cri.go:89] found id: ""
	I1217 01:38:35.089344 1408373 logs.go:282] 0 containers: []
	W1217 01:38:35.089353 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:35.089360 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:35.089432 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:35.115834 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:35.115857 1408373 cri.go:89] found id: ""
	I1217 01:38:35.115875 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:35.115938 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:35.121269 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:35.121372 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:35.156358 1408373 cri.go:89] found id: ""
	I1217 01:38:35.156391 1408373 logs.go:282] 0 containers: []
	W1217 01:38:35.156400 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:35.156426 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:35.156516 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:35.191753 1408373 cri.go:89] found id: ""
	I1217 01:38:35.191779 1408373 logs.go:282] 0 containers: []
	W1217 01:38:35.191789 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:35.191803 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:35.191814 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:35.207905 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:35.207934 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:35.241776 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:35.241809 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:35.270197 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:35.270230 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:35.330017 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:35.330052 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:35.396467 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:35.396499 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:35.396512 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:35.434802 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:35.434835 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:35.469568 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:35.469610 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:35.500315 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:35.500345 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:38.030992 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:38.042144 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:38.042228 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:38.075369 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:38.075390 1408373 cri.go:89] found id: ""
	I1217 01:38:38.075398 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:38.075465 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:38.080044 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:38.080120 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:38.105756 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:38.105777 1408373 cri.go:89] found id: ""
	I1217 01:38:38.105786 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:38.105842 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:38.109755 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:38.109827 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:38.143055 1408373 cri.go:89] found id: ""
	I1217 01:38:38.143075 1408373 logs.go:282] 0 containers: []
	W1217 01:38:38.143084 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:38.143090 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:38.143148 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:38.172181 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:38.172201 1408373 cri.go:89] found id: ""
	I1217 01:38:38.172209 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:38.172273 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:38.176400 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:38.176473 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:38.210337 1408373 cri.go:89] found id: ""
	I1217 01:38:38.210416 1408373 logs.go:282] 0 containers: []
	W1217 01:38:38.210439 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:38.210457 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:38.210550 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:38.234866 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:38.234928 1408373 cri.go:89] found id: ""
	I1217 01:38:38.234950 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:38.235031 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:38.238500 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:38.238571 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:38.266233 1408373 cri.go:89] found id: ""
	I1217 01:38:38.266255 1408373 logs.go:282] 0 containers: []
	W1217 01:38:38.266263 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:38.266270 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:38.266328 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:38.291152 1408373 cri.go:89] found id: ""
	I1217 01:38:38.291231 1408373 logs.go:282] 0 containers: []
	W1217 01:38:38.291254 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:38.291274 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:38.291299 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:38.348004 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:38.348038 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:38.415207 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:38.415230 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:38.415244 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:38.450069 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:38.450098 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:38.487992 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:38.488025 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:38.517776 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:38.517810 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:38.546627 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:38.546655 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:38.561380 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:38.561409 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:38.594878 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:38.594912 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
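	(editor note) The cycle above repeats for the rest of this section: the harness enumerates each control-plane component by container name, then tails the logs of whatever it finds. The split in the results is meaningful: kube-apiserver, etcd, kube-scheduler and kube-controller-manager are static pods the kubelet launches directly from manifests, so their containers exist even while the apiserver refuses connections; coredns, kube-proxy, kindnet and storage-provisioner are created through the API and therefore stay empty here. A minimal sketch reproducing the discovery step by hand, assuming a shell inside the minikube node (e.g. via minikube ssh); the loop itself is hypothetical, but every flag is taken from the Run: lines above:

	    # Hypothetical manual re-run of the per-component container discovery.
	    # Assumes crictl is on PATH inside the node, as the log's `which crictl` implies.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      # -a includes exited containers ({State:all}); --quiet prints IDs only.
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -n "$ids" ]; then echo "$name: $ids"
	      else echo "$name: no container found"   # mirrors the W-level lines above
	      fi
	    done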
	I1217 01:38:41.127259 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:41.141248 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:41.141318 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:41.210353 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:41.210372 1408373 cri.go:89] found id: ""
	I1217 01:38:41.210380 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:41.210440 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:41.223046 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:41.223124 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:41.254690 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:41.254717 1408373 cri.go:89] found id: ""
	I1217 01:38:41.254726 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:41.254786 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:41.258798 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:41.258881 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:41.284212 1408373 cri.go:89] found id: ""
	I1217 01:38:41.284243 1408373 logs.go:282] 0 containers: []
	W1217 01:38:41.284252 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:41.284259 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:41.284321 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:41.310366 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:41.310391 1408373 cri.go:89] found id: ""
	I1217 01:38:41.310399 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:41.310456 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:41.314301 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:41.314445 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:41.342699 1408373 cri.go:89] found id: ""
	I1217 01:38:41.342724 1408373 logs.go:282] 0 containers: []
	W1217 01:38:41.342733 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:41.342740 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:41.342813 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:41.367844 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:41.367868 1408373 cri.go:89] found id: ""
	I1217 01:38:41.367876 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:41.367933 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:41.371713 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:41.371785 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:41.397129 1408373 cri.go:89] found id: ""
	I1217 01:38:41.397153 1408373 logs.go:282] 0 containers: []
	W1217 01:38:41.397162 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:41.397169 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:41.397225 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:41.422219 1408373 cri.go:89] found id: ""
	I1217 01:38:41.422298 1408373 logs.go:282] 0 containers: []
	W1217 01:38:41.422321 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:41.422357 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:41.422384 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:41.480145 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:41.480179 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:41.516092 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:41.516124 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:41.545232 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:41.545260 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:41.574570 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:41.574600 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:41.589458 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:41.589500 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:41.654309 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:41.654332 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:41.654345 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:41.692931 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:41.692962 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:41.732288 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:41.732326 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:44.262044 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:44.272384 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:44.272460 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:44.302120 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:44.302140 1408373 cri.go:89] found id: ""
	I1217 01:38:44.302149 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:44.302207 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:44.306038 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:44.306164 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:44.331961 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:44.331983 1408373 cri.go:89] found id: ""
	I1217 01:38:44.331991 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:44.332048 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:44.335956 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:44.336035 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:44.361738 1408373 cri.go:89] found id: ""
	I1217 01:38:44.361762 1408373 logs.go:282] 0 containers: []
	W1217 01:38:44.361771 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:44.361778 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:44.361839 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:44.391113 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:44.391191 1408373 cri.go:89] found id: ""
	I1217 01:38:44.391209 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:44.391285 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:44.395097 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:44.395168 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:44.419598 1408373 cri.go:89] found id: ""
	I1217 01:38:44.419624 1408373 logs.go:282] 0 containers: []
	W1217 01:38:44.419633 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:44.419639 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:44.419704 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:44.445927 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:44.445948 1408373 cri.go:89] found id: ""
	I1217 01:38:44.445957 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:44.446014 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:44.449798 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:44.449868 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:44.475135 1408373 cri.go:89] found id: ""
	I1217 01:38:44.475200 1408373 logs.go:282] 0 containers: []
	W1217 01:38:44.475217 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:44.475225 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:44.475287 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:44.505146 1408373 cri.go:89] found id: ""
	I1217 01:38:44.505170 1408373 logs.go:282] 0 containers: []
	W1217 01:38:44.505179 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:44.505193 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:44.505206 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:44.520155 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:44.520235 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:44.559796 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:44.559831 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:44.591644 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:44.591673 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:44.625962 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:44.625993 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:44.659573 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:44.659607 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:44.688756 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:44.688835 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:44.730073 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:44.730100 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:44.789336 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:44.789372 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:44.858750 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
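	(editor note) Every "failed describe nodes" block in this section fails the same way: the staged kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0 targets the in-node kubeconfig, and the TCP connect to localhost:8443 is refused outright, meaning nothing is accepting on the apiserver port yet, as opposed to a TLS or authorization error. A hedged sketch of checking that endpoint directly from inside the node; the kubectl invocation is copied from the log, while the curl probe is an assumption (curl availability in the node image is not shown here):

	    # Same command the harness retries, copied verbatim from the log:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # Hypothetical raw probe of the same port; "Connection refused" confirms
	    # the listener is down rather than rejecting the request:
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not accepting connections"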
	I1217 01:38:47.360441 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:47.371238 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:47.371316 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:47.396845 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:47.396871 1408373 cri.go:89] found id: ""
	I1217 01:38:47.396879 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:47.396955 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:47.400878 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:47.401001 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:47.426128 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:47.426151 1408373 cri.go:89] found id: ""
	I1217 01:38:47.426160 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:47.426253 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:47.429975 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:47.430067 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:47.454700 1408373 cri.go:89] found id: ""
	I1217 01:38:47.454778 1408373 logs.go:282] 0 containers: []
	W1217 01:38:47.454794 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:47.454802 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:47.454863 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:47.480330 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:47.480362 1408373 cri.go:89] found id: ""
	I1217 01:38:47.480370 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:47.480435 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:47.484567 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:47.484643 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:47.521934 1408373 cri.go:89] found id: ""
	I1217 01:38:47.521959 1408373 logs.go:282] 0 containers: []
	W1217 01:38:47.521969 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:47.521975 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:47.522037 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:47.547870 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:47.547893 1408373 cri.go:89] found id: ""
	I1217 01:38:47.547901 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:47.547957 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:47.551759 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:47.551827 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:47.579494 1408373 cri.go:89] found id: ""
	I1217 01:38:47.579518 1408373 logs.go:282] 0 containers: []
	W1217 01:38:47.579526 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:47.579533 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:47.579592 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:47.604069 1408373 cri.go:89] found id: ""
	I1217 01:38:47.604135 1408373 logs.go:282] 0 containers: []
	W1217 01:38:47.604151 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:47.604167 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:47.604180 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:47.636471 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:47.636503 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:47.665084 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:47.665110 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:47.722283 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:47.722318 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:47.759797 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:47.759828 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:47.797411 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:47.797439 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:47.825842 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:47.825879 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:47.840752 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:47.840781 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:47.934919 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:47.934941 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:47.934954 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:50.471732 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:50.482115 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:50.482189 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:50.508648 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:50.508670 1408373 cri.go:89] found id: ""
	I1217 01:38:50.508679 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:50.508734 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:50.512627 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:50.512710 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:50.542170 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:50.542193 1408373 cri.go:89] found id: ""
	I1217 01:38:50.542202 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:50.542262 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:50.546056 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:50.546132 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:50.570413 1408373 cri.go:89] found id: ""
	I1217 01:38:50.570437 1408373 logs.go:282] 0 containers: []
	W1217 01:38:50.570446 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:50.570452 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:50.570526 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:50.599646 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:50.599667 1408373 cri.go:89] found id: ""
	I1217 01:38:50.599676 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:50.599754 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:50.603430 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:50.603501 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:50.632928 1408373 cri.go:89] found id: ""
	I1217 01:38:50.632954 1408373 logs.go:282] 0 containers: []
	W1217 01:38:50.632962 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:50.632968 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:50.633033 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:50.659071 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:50.659094 1408373 cri.go:89] found id: ""
	I1217 01:38:50.659103 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:50.659178 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:50.663043 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:50.663141 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:50.688060 1408373 cri.go:89] found id: ""
	I1217 01:38:50.688083 1408373 logs.go:282] 0 containers: []
	W1217 01:38:50.688092 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:50.688098 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:50.688163 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:50.713006 1408373 cri.go:89] found id: ""
	I1217 01:38:50.713034 1408373 logs.go:282] 0 containers: []
	W1217 01:38:50.713042 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:50.713060 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:50.713072 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:50.744704 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:50.744735 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:50.774297 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:50.774330 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:50.789884 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:50.789915 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:50.832064 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:50.832105 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:50.859199 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:50.859226 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:50.925369 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:50.925405 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:50.994995 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:50.995015 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:50.995027 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:51.032282 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:51.032318 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
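	(editor note) Apart from ordering, each gathering pass pulls the same fixed set of sources. For reference, the full set of commands a single pass runs, copied from the Run: lines in this section; only $ID is a placeholder for a container ID returned by the discovery step:

	    sudo journalctl -u kubelet -n 400        # kubelet unit, last 400 lines
	    sudo journalctl -u containerd -n 400     # containerd unit
	    # kernel ring buffer, warn level and above, human-readable, no pager/colour:
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /usr/local/bin/crictl logs --tail 400 "$ID"   # tail one discovered container
	    # container status, falling back to docker if crictl is missing:
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a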
	I1217 01:38:53.565464 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:53.576017 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:53.576094 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:53.602656 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:53.602679 1408373 cri.go:89] found id: ""
	I1217 01:38:53.602687 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:53.602743 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:53.606605 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:53.606690 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:53.632889 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:53.632910 1408373 cri.go:89] found id: ""
	I1217 01:38:53.632919 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:53.632976 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:53.636812 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:53.636894 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:53.662118 1408373 cri.go:89] found id: ""
	I1217 01:38:53.662143 1408373 logs.go:282] 0 containers: []
	W1217 01:38:53.662154 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:53.662160 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:53.662221 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:53.687142 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:53.687165 1408373 cri.go:89] found id: ""
	I1217 01:38:53.687173 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:53.687256 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:53.691197 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:53.691291 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:53.716289 1408373 cri.go:89] found id: ""
	I1217 01:38:53.716326 1408373 logs.go:282] 0 containers: []
	W1217 01:38:53.716336 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:53.716344 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:53.716425 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:53.749349 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:53.749372 1408373 cri.go:89] found id: ""
	I1217 01:38:53.749380 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:53.749445 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:53.753307 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:53.753399 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:53.778054 1408373 cri.go:89] found id: ""
	I1217 01:38:53.778083 1408373 logs.go:282] 0 containers: []
	W1217 01:38:53.778093 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:53.778099 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:53.778164 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:53.803915 1408373 cri.go:89] found id: ""
	I1217 01:38:53.803940 1408373 logs.go:282] 0 containers: []
	W1217 01:38:53.803950 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:53.803964 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:53.803977 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:53.866300 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:53.866337 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:53.883854 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:53.883883 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:53.965576 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:53.965668 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:53.965704 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:54.001628 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:54.001725 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:54.044213 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:54.044245 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:54.093440 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:54.093474 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:54.131071 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:54.131103 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:54.161693 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:54.161730 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:56.693961 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:56.706724 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:56.706803 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:56.731538 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:56.731560 1408373 cri.go:89] found id: ""
	I1217 01:38:56.731569 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:56.731649 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:56.735473 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:56.735557 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:56.760872 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:56.760938 1408373 cri.go:89] found id: ""
	I1217 01:38:56.760962 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:56.761049 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:56.764922 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:56.764994 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:56.789898 1408373 cri.go:89] found id: ""
	I1217 01:38:56.789938 1408373 logs.go:282] 0 containers: []
	W1217 01:38:56.789948 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:56.789969 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:56.790056 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:56.815576 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:56.815599 1408373 cri.go:89] found id: ""
	I1217 01:38:56.815608 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:56.815667 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:56.819616 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:56.819691 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:56.845531 1408373 cri.go:89] found id: ""
	I1217 01:38:56.845558 1408373 logs.go:282] 0 containers: []
	W1217 01:38:56.845567 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:56.845573 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:56.845712 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:38:56.875321 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:56.875341 1408373 cri.go:89] found id: ""
	I1217 01:38:56.875351 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:38:56.875409 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:56.880406 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:38:56.880479 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:38:56.913412 1408373 cri.go:89] found id: ""
	I1217 01:38:56.913435 1408373 logs.go:282] 0 containers: []
	W1217 01:38:56.913443 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:38:56.913449 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:38:56.913508 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:38:56.945160 1408373 cri.go:89] found id: ""
	I1217 01:38:56.945186 1408373 logs.go:282] 0 containers: []
	W1217 01:38:56.945195 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:38:56.945210 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:38:56.945222 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:56.981978 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:38:56.982009 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:38:57.014915 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:38:57.014960 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:57.054820 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:38:57.054848 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:38:57.085436 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:38:57.085470 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:38:57.112829 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:38:57.112857 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:38:57.172883 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:38:57.172919 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:38:57.190551 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:38:57.190578 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:38:57.258363 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:38:57.258388 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:38:57.258403 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
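	(editor note) The roughly 2.5 s gap between cycle timestamps comes from the apiserver wait: each pass starts by re-checking for a live kube-apiserver process with pgrep before gathering logs again. A minimal equivalent wait loop, assuming a shell on the node; the pgrep flags match the log, but the timeout and sleep interval are illustrative assumptions, not values read from the harness:

	    # Hypothetical wait-for-apiserver loop mirroring the polling above.
	    # pgrep flags as in the log: -x whole-pattern match, -n newest, -f full cmdline.
	    deadline=$((SECONDS + 300))            # assumed 300 s budget, for illustration
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out" >&2; exit 1; }
	      sleep 2.5                            # matches the observed cadence
	    done
	    echo "kube-apiserver process is up"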
	I1217 01:38:59.796880 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:38:59.806887 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:38:59.806958 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:38:59.836205 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:38:59.836278 1408373 cri.go:89] found id: ""
	I1217 01:38:59.836313 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:38:59.836406 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:59.840234 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:38:59.840307 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:38:59.865953 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:38:59.865973 1408373 cri.go:89] found id: ""
	I1217 01:38:59.865982 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:38:59.866044 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:59.873182 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:38:59.873313 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:38:59.910889 1408373 cri.go:89] found id: ""
	I1217 01:38:59.910910 1408373 logs.go:282] 0 containers: []
	W1217 01:38:59.910918 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:38:59.910926 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:38:59.910985 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:38:59.943478 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:38:59.943498 1408373 cri.go:89] found id: ""
	I1217 01:38:59.943506 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:38:59.943564 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:38:59.949403 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:38:59.949477 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:38:59.976594 1408373 cri.go:89] found id: ""
	I1217 01:38:59.976619 1408373 logs.go:282] 0 containers: []
	W1217 01:38:59.976629 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:38:59.976635 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:38:59.976694 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:00.002209 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:00.002232 1408373 cri.go:89] found id: ""
	I1217 01:39:00.002241 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:00.002302 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:00.057311 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:00.057448 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:00.157948 1408373 cri.go:89] found id: ""
	I1217 01:39:00.158039 1408373 logs.go:282] 0 containers: []
	W1217 01:39:00.158065 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:00.158089 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:00.158219 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:00.264238 1408373 cri.go:89] found id: ""
	I1217 01:39:00.264328 1408373 logs.go:282] 0 containers: []
	W1217 01:39:00.264355 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:00.264401 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:00.264434 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:00.355015 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:00.355095 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:00.355127 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:00.400358 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:00.400397 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:00.440283 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:00.440318 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:00.472302 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:00.472341 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:00.531993 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:00.532029 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:00.547422 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:00.547447 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:00.595585 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:00.595661 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:00.640508 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:00.640582 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:03.192340 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:03.202659 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:03.202732 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:03.236036 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:03.236060 1408373 cri.go:89] found id: ""
	I1217 01:39:03.236069 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:03.236126 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:03.240099 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:03.240173 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:03.266225 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:03.266246 1408373 cri.go:89] found id: ""
	I1217 01:39:03.266255 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:03.266310 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:03.270056 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:03.270125 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:03.295490 1408373 cri.go:89] found id: ""
	I1217 01:39:03.295514 1408373 logs.go:282] 0 containers: []
	W1217 01:39:03.295522 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:03.295528 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:03.295587 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:03.320080 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:03.320100 1408373 cri.go:89] found id: ""
	I1217 01:39:03.320108 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:03.320164 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:03.324025 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:03.324096 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:03.349142 1408373 cri.go:89] found id: ""
	I1217 01:39:03.349164 1408373 logs.go:282] 0 containers: []
	W1217 01:39:03.349171 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:03.349177 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:03.349241 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:03.373824 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:03.373845 1408373 cri.go:89] found id: ""
	I1217 01:39:03.373853 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:03.373909 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:03.377942 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:03.378066 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:03.403105 1408373 cri.go:89] found id: ""
	I1217 01:39:03.403128 1408373 logs.go:282] 0 containers: []
	W1217 01:39:03.403137 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:03.403143 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:03.403230 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:03.426994 1408373 cri.go:89] found id: ""
	I1217 01:39:03.427019 1408373 logs.go:282] 0 containers: []
	W1217 01:39:03.427028 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:03.427041 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:03.427053 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:03.486079 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:03.486120 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:03.501083 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:03.501110 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:03.535944 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:03.535973 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:03.569657 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:03.569694 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:03.606048 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:03.606079 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:03.694218 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:03.694240 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:03.694254 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:03.724160 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:03.724189 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:03.752970 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:03.753006 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
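Every "describe nodes" attempt in this stretch fails identically: kubectl, run inside the node against the embedded kubeconfig, is refused on localhost:8443, even though crictl finds an apiserver container (5b90a5ba...). That combination points at a process that exists but is not serving. A hedged triage sketch, reusing the binary and kubeconfig paths from the log; the /readyz probe is an assumption for illustration, not something minikube runs here:

	# Is anything listening on the apiserver port inside the node?
	sudo ss -ltn 'sport = :8443'
	# Query the readiness endpoint with the same kubectl binary and
	# kubeconfig that the log gatherer uses.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz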
	I1217 01:39:06.281810 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:06.292691 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:06.292770 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:06.318834 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:06.318855 1408373 cri.go:89] found id: ""
	I1217 01:39:06.318868 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:06.318925 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:06.322769 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:06.322843 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:06.348063 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:06.348087 1408373 cri.go:89] found id: ""
	I1217 01:39:06.348096 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:06.348153 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:06.351970 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:06.352042 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:06.380319 1408373 cri.go:89] found id: ""
	I1217 01:39:06.380343 1408373 logs.go:282] 0 containers: []
	W1217 01:39:06.380354 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:06.380361 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:06.380419 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:06.405714 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:06.405738 1408373 cri.go:89] found id: ""
	I1217 01:39:06.405747 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:06.405814 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:06.409870 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:06.409948 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:06.435817 1408373 cri.go:89] found id: ""
	I1217 01:39:06.435847 1408373 logs.go:282] 0 containers: []
	W1217 01:39:06.435856 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:06.435863 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:06.435930 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:06.461481 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:06.461503 1408373 cri.go:89] found id: ""
	I1217 01:39:06.461511 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:06.461565 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:06.465406 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:06.465479 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:06.490918 1408373 cri.go:89] found id: ""
	I1217 01:39:06.490995 1408373 logs.go:282] 0 containers: []
	W1217 01:39:06.491011 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:06.491018 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:06.491080 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:06.517034 1408373 cri.go:89] found id: ""
	I1217 01:39:06.517060 1408373 logs.go:282] 0 containers: []
	W1217 01:39:06.517070 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:06.517085 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:06.517104 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:06.549302 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:06.549335 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:06.583670 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:06.583702 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:06.612639 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:06.612674 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:06.675810 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:06.675845 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:06.692901 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:06.692927 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:06.765859 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:06.765882 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:06.765896 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:06.811940 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:06.811970 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:06.845746 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:06.845777 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
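Each "listing CRI containers" step maps one component name to container IDs with a single filtered crictl call: --quiet prints bare IDs, one per line, and --name filters by container name, so empty output is exactly what becomes the "0 containers" and No container was found matching lines above. The same check as a standalone snippet, with kube-proxy as an arbitrary example component:

	# --quiet: print IDs only; --name: filter containers by name.
	ids="$(sudo crictl ps -a --quiet --name=kube-proxy)"
	if [ -z "$ids" ]; then
	  echo 'no container matching "kube-proxy"'
	fi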
	I1217 01:39:09.377768 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:09.388283 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:09.388355 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:09.420339 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:09.420360 1408373 cri.go:89] found id: ""
	I1217 01:39:09.420369 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:09.420425 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:09.425682 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:09.425760 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:09.457061 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:09.457081 1408373 cri.go:89] found id: ""
	I1217 01:39:09.457089 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:09.457147 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:09.461586 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:09.461708 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:09.488313 1408373 cri.go:89] found id: ""
	I1217 01:39:09.488334 1408373 logs.go:282] 0 containers: []
	W1217 01:39:09.488342 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:09.488349 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:09.488407 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:09.515295 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:09.515324 1408373 cri.go:89] found id: ""
	I1217 01:39:09.515334 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:09.515396 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:09.519400 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:09.519472 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:09.543976 1408373 cri.go:89] found id: ""
	I1217 01:39:09.544000 1408373 logs.go:282] 0 containers: []
	W1217 01:39:09.544009 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:09.544015 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:09.544073 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:09.569088 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:09.569112 1408373 cri.go:89] found id: ""
	I1217 01:39:09.569121 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:09.569177 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:09.573006 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:09.573080 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:09.597882 1408373 cri.go:89] found id: ""
	I1217 01:39:09.597906 1408373 logs.go:282] 0 containers: []
	W1217 01:39:09.597915 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:09.597921 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:09.597982 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:09.627577 1408373 cri.go:89] found id: ""
	I1217 01:39:09.627602 1408373 logs.go:282] 0 containers: []
	W1217 01:39:09.627610 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:09.627623 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:09.627634 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:09.660891 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:09.660926 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:09.677495 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:09.677523 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:09.740565 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:09.740587 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:09.740602 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:09.776885 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:09.776917 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:09.808924 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:09.809007 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:09.866341 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:09.866375 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:09.900044 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:09.900077 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:09.933744 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:09.933777 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:12.473789 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:12.485384 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:12.485457 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:12.518049 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:12.518075 1408373 cri.go:89] found id: ""
	I1217 01:39:12.518083 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:12.518139 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:12.523568 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:12.523643 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:12.561085 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:12.561110 1408373 cri.go:89] found id: ""
	I1217 01:39:12.561118 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:12.561178 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:12.565715 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:12.565792 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:12.604477 1408373 cri.go:89] found id: ""
	I1217 01:39:12.604504 1408373 logs.go:282] 0 containers: []
	W1217 01:39:12.604514 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:12.604520 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:12.604635 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:12.666177 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:12.666208 1408373 cri.go:89] found id: ""
	I1217 01:39:12.666217 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:12.666309 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:12.672423 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:12.672523 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:12.745957 1408373 cri.go:89] found id: ""
	I1217 01:39:12.745990 1408373 logs.go:282] 0 containers: []
	W1217 01:39:12.746017 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:12.746030 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:12.746109 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:12.775836 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:12.775870 1408373 cri.go:89] found id: ""
	I1217 01:39:12.775879 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:12.775973 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:12.780217 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:12.780334 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:12.815695 1408373 cri.go:89] found id: ""
	I1217 01:39:12.815729 1408373 logs.go:282] 0 containers: []
	W1217 01:39:12.815738 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:12.815744 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:12.815838 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:12.851521 1408373 cri.go:89] found id: ""
	I1217 01:39:12.851560 1408373 logs.go:282] 0 containers: []
	W1217 01:39:12.851569 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:12.851605 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:12.851638 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:12.916311 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:12.916353 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:12.937091 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:12.937120 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:12.998638 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:12.998754 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:13.040394 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:13.040486 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:13.087446 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:13.087572 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:13.124095 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:13.124126 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:13.155998 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:13.156036 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:13.226542 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:13.226574 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:13.226587 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:15.772335 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:15.783812 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:15.783892 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:15.821145 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:15.821176 1408373 cri.go:89] found id: ""
	I1217 01:39:15.821185 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:15.821252 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:15.826108 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:15.826208 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:15.863768 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:15.863793 1408373 cri.go:89] found id: ""
	I1217 01:39:15.863802 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:15.863868 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:15.869731 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:15.869818 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:15.910247 1408373 cri.go:89] found id: ""
	I1217 01:39:15.910291 1408373 logs.go:282] 0 containers: []
	W1217 01:39:15.910300 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:15.910307 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:15.910378 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:15.945177 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:15.945202 1408373 cri.go:89] found id: ""
	I1217 01:39:15.945211 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:15.945278 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:15.950082 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:15.950180 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:15.991575 1408373 cri.go:89] found id: ""
	I1217 01:39:15.991621 1408373 logs.go:282] 0 containers: []
	W1217 01:39:15.991630 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:15.991637 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:15.991704 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:16.036339 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:16.036364 1408373 cri.go:89] found id: ""
	I1217 01:39:16.036374 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:16.036443 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:16.041325 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:16.041415 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:16.085905 1408373 cri.go:89] found id: ""
	I1217 01:39:16.085932 1408373 logs.go:282] 0 containers: []
	W1217 01:39:16.085942 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:16.085950 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:16.086020 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:16.119959 1408373 cri.go:89] found id: ""
	I1217 01:39:16.119980 1408373 logs.go:282] 0 containers: []
	W1217 01:39:16.119989 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:16.120006 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:16.120017 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:16.172799 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:16.172832 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:16.216681 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:16.216712 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:16.256710 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:16.256746 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:16.297422 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:16.297454 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:16.318912 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:16.318941 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:16.384767 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:16.384802 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:16.471043 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:16.471079 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:16.535599 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:16.535636 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:16.631306 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
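The cadence of this section is a fixed retry loop: every iteration opens with pgrep -xnf kube-apiserver.*minikube.* (-f matches the full command line, -x exactly, -n the newest such process) and, while the apiserver check keeps failing, the full log gather reruns about every three seconds. A hypothetical standalone loop with the same shape; the three-second interval is inferred from the timestamps above, not taken from minikube's source:

	# Re-check for a running apiserver process until one appears,
	# re-listing apiserver containers on every miss.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sleep 3
	done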
	I1217 01:39:19.131536 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:19.142837 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:19.142910 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:19.172230 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:19.172252 1408373 cri.go:89] found id: ""
	I1217 01:39:19.172260 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:19.172315 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:19.176053 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:19.176129 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:19.201716 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:19.201738 1408373 cri.go:89] found id: ""
	I1217 01:39:19.201746 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:19.201801 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:19.205536 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:19.205609 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:19.232043 1408373 cri.go:89] found id: ""
	I1217 01:39:19.232064 1408373 logs.go:282] 0 containers: []
	W1217 01:39:19.232073 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:19.232079 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:19.232136 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:19.277284 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:19.277305 1408373 cri.go:89] found id: ""
	I1217 01:39:19.277314 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:19.277369 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:19.281542 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:19.281618 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:19.320371 1408373 cri.go:89] found id: ""
	I1217 01:39:19.320401 1408373 logs.go:282] 0 containers: []
	W1217 01:39:19.320410 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:19.320416 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:19.320476 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:19.354305 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:19.354324 1408373 cri.go:89] found id: ""
	I1217 01:39:19.354332 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:19.354389 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:19.358690 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:19.358757 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:19.393028 1408373 cri.go:89] found id: ""
	I1217 01:39:19.393052 1408373 logs.go:282] 0 containers: []
	W1217 01:39:19.393060 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:19.393067 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:19.393126 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:19.448034 1408373 cri.go:89] found id: ""
	I1217 01:39:19.448056 1408373 logs.go:282] 0 containers: []
	W1217 01:39:19.448064 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:19.448078 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:19.448089 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:19.473177 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:19.473206 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:19.594548 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:19.594566 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:19.594578 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:19.654005 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:19.654036 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:19.706659 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:19.706691 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:19.742761 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:19.742796 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:19.781205 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:19.781229 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:19.847386 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:19.847413 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:19.892960 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:19.892988 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:22.451541 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:22.462021 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:22.462106 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:22.487538 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:22.487558 1408373 cri.go:89] found id: ""
	I1217 01:39:22.487566 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:22.487622 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:22.491401 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:22.491474 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:22.521548 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:22.521572 1408373 cri.go:89] found id: ""
	I1217 01:39:22.521581 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:22.521666 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:22.525416 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:22.525489 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:22.550278 1408373 cri.go:89] found id: ""
	I1217 01:39:22.550301 1408373 logs.go:282] 0 containers: []
	W1217 01:39:22.550309 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:22.550316 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:22.550374 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:22.580500 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:22.580574 1408373 cri.go:89] found id: ""
	I1217 01:39:22.580597 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:22.580673 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:22.584526 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:22.584600 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:22.611231 1408373 cri.go:89] found id: ""
	I1217 01:39:22.611254 1408373 logs.go:282] 0 containers: []
	W1217 01:39:22.611263 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:22.611269 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:22.611334 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:22.643390 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:22.643413 1408373 cri.go:89] found id: ""
	I1217 01:39:22.643422 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:22.643476 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:22.649046 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:22.649140 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:22.686474 1408373 cri.go:89] found id: ""
	I1217 01:39:22.686499 1408373 logs.go:282] 0 containers: []
	W1217 01:39:22.686509 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:22.686516 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:22.686576 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:22.711673 1408373 cri.go:89] found id: ""
	I1217 01:39:22.711754 1408373 logs.go:282] 0 containers: []
	W1217 01:39:22.711778 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:22.711800 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:22.711827 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:22.726739 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:22.726776 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:22.757678 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:22.757714 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:22.786381 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:22.786409 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:22.844823 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:22.844855 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:22.931896 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:22.931965 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:22.931992 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:22.991400 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:22.991475 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:23.037880 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:23.037921 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:23.086517 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:23.086588 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:25.620810 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:25.633741 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:25.633825 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:25.661200 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:25.661218 1408373 cri.go:89] found id: ""
	I1217 01:39:25.661227 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:25.661286 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:25.665213 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:25.665284 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:25.691378 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:25.691398 1408373 cri.go:89] found id: ""
	I1217 01:39:25.691407 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:25.691464 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:25.695999 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:25.696127 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:25.724135 1408373 cri.go:89] found id: ""
	I1217 01:39:25.724165 1408373 logs.go:282] 0 containers: []
	W1217 01:39:25.724174 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:25.724181 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:25.724250 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:25.753293 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:25.753317 1408373 cri.go:89] found id: ""
	I1217 01:39:25.753326 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:25.753412 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:25.757212 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:25.757296 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:25.783731 1408373 cri.go:89] found id: ""
	I1217 01:39:25.783804 1408373 logs.go:282] 0 containers: []
	W1217 01:39:25.783819 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:25.783827 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:25.783888 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:25.808818 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:25.808838 1408373 cri.go:89] found id: ""
	I1217 01:39:25.808847 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:25.808902 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:25.812497 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:25.812617 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:25.841903 1408373 cri.go:89] found id: ""
	I1217 01:39:25.841927 1408373 logs.go:282] 0 containers: []
	W1217 01:39:25.841936 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:25.841942 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:25.842000 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:25.866768 1408373 cri.go:89] found id: ""
	I1217 01:39:25.866789 1408373 logs.go:282] 0 containers: []
	W1217 01:39:25.866798 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:25.866814 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:25.866826 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:25.935240 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:25.935261 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:25.935274 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:25.970018 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:25.970091 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:26.005996 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:26.006030 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:26.046096 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:26.046129 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:26.105715 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:26.105749 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:26.140228 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:26.140259 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:26.169784 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:26.169819 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:26.201492 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:26.201520 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
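The kubelet and containerd logs are read straight from the systemd journal, capped at the last 400 lines per unit, while dmesg is restricted to warning severity and above. These are the exact commands from the Run: lines, shown together for reference with only the unit name varying:

	# Last 400 journal lines for one systemd unit (kubelet, containerd, ...).
	sudo journalctl -u kubelet -n 400
	# Kernel ring buffer: human-readable, no pager or color,
	# warnings and worse only.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400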
	I1217 01:39:28.716589 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:28.726699 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:28.726770 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:28.753123 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:28.753149 1408373 cri.go:89] found id: ""
	I1217 01:39:28.753157 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:28.753216 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:28.757052 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:28.757126 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:28.784183 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:28.784206 1408373 cri.go:89] found id: ""
	I1217 01:39:28.784214 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:28.784269 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:28.788020 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:28.788139 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:28.812879 1408373 cri.go:89] found id: ""
	I1217 01:39:28.812901 1408373 logs.go:282] 0 containers: []
	W1217 01:39:28.812920 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:28.812928 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:28.812994 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:28.837387 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:28.837408 1408373 cri.go:89] found id: ""
	I1217 01:39:28.837416 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:28.837472 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:28.841314 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:28.841386 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:28.867068 1408373 cri.go:89] found id: ""
	I1217 01:39:28.867090 1408373 logs.go:282] 0 containers: []
	W1217 01:39:28.867098 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:28.867104 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:28.867162 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:28.897080 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:28.897104 1408373 cri.go:89] found id: ""
	I1217 01:39:28.897112 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:28.897168 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:28.901016 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:28.901095 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:28.928251 1408373 cri.go:89] found id: ""
	I1217 01:39:28.928276 1408373 logs.go:282] 0 containers: []
	W1217 01:39:28.928285 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:28.928292 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:28.928349 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:28.957070 1408373 cri.go:89] found id: ""
	I1217 01:39:28.957092 1408373 logs.go:282] 0 containers: []
	W1217 01:39:28.957100 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:28.957120 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:28.957132 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:29.022565 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:29.022587 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:29.022601 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:29.059049 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:29.059078 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:29.091968 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:29.092000 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:29.130700 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:29.130735 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:29.159953 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:29.159985 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:29.190155 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:29.190193 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:29.219224 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:29.219252 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:29.276865 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:29.276897 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
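	(Editor's note: after discovery, each pass fans out over fixed log sources: systemd units via `journalctl -u <unit> -n 400`, container logs via `crictl logs --tail 400 <id>`, and kernel messages via a filtered `dmesg`. A hedged sketch of that dispatch, assuming only the shell commands visible above; the switch structure and function names are illustrative, not minikube's actual logs.go.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// logCmd maps a log source to the collection command seen in the trace.
func logCmd(source, arg string) *exec.Cmd {
	switch source {
	case "unit": // kubelet, containerd
		return exec.Command("sudo", "journalctl", "-u", arg, "-n", "400")
	case "container": // kube-apiserver, etcd, kube-scheduler, ...
		return exec.Command("sudo", "/usr/local/bin/crictl", "logs", "--tail", "400", arg)
	case "dmesg":
		return exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	default:
		return nil
	}
}

func main() {
	for _, s := range [][2]string{{"unit", "kubelet"}, {"unit", "containerd"}, {"dmesg", ""}} {
		out, err := logCmd(s[0], s[1]).CombinedOutput()
		fmt.Printf("source=%s arg=%s err=%v bytes=%d\n", s[0], s[1], err, len(out))
	}
}
```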
	I1217 01:39:31.793338 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:31.803402 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:31.803474 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:31.828722 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:31.828743 1408373 cri.go:89] found id: ""
	I1217 01:39:31.828751 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:31.828813 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:31.832478 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:31.832557 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:31.858134 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:31.858154 1408373 cri.go:89] found id: ""
	I1217 01:39:31.858163 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:31.858217 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:31.862044 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:31.862124 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:31.886431 1408373 cri.go:89] found id: ""
	I1217 01:39:31.886466 1408373 logs.go:282] 0 containers: []
	W1217 01:39:31.886479 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:31.886485 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:31.886544 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:31.912844 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:31.912864 1408373 cri.go:89] found id: ""
	I1217 01:39:31.912872 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:31.912931 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:31.917027 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:31.917097 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:31.944511 1408373 cri.go:89] found id: ""
	I1217 01:39:31.944537 1408373 logs.go:282] 0 containers: []
	W1217 01:39:31.944545 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:31.944552 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:31.944613 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:31.974113 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:31.974133 1408373 cri.go:89] found id: ""
	I1217 01:39:31.974141 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:31.974197 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:31.978041 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:31.978112 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:32.012039 1408373 cri.go:89] found id: ""
	I1217 01:39:32.012062 1408373 logs.go:282] 0 containers: []
	W1217 01:39:32.012071 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:32.012077 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:32.012140 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:32.047143 1408373 cri.go:89] found id: ""
	I1217 01:39:32.047170 1408373 logs.go:282] 0 containers: []
	W1217 01:39:32.047179 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:32.047229 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:32.047247 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:32.110785 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:32.110864 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:32.200806 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:32.200830 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:32.200843 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:32.243610 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:32.243649 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:32.293063 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:32.293099 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:32.348562 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:32.348599 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:32.411135 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:32.411163 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:32.439155 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:32.439192 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:32.515125 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:32.515197 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
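	(Editor's note: the cycle repeats roughly every three seconds: `pgrep -xnf kube-apiserver.*minikube.*` confirms the apiserver process, then a full discovery-and-gather pass runs because the cluster is still not answering on 8443, and the harness retries until its start timeout expires. A simplified retry loop of that shape follows; the interval and overall deadline are assumed values, not minikube's real configuration.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning checks for the apiserver process the same way the trace
// does; pgrep exits non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed budget for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found; proceed to health checks")
			// In the real trace, the process exists but the port still refuses
			// connections, so a full log-gather pass runs before the next retry.
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for a healthy kube-apiserver")
}
```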
	I1217 01:39:35.049756 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:35.062938 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:35.063033 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:35.093429 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:35.093457 1408373 cri.go:89] found id: ""
	I1217 01:39:35.093466 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:35.093528 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:35.098348 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:35.098430 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:35.139476 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:35.139503 1408373 cri.go:89] found id: ""
	I1217 01:39:35.139512 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:35.139571 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:35.144269 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:35.144359 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:35.176472 1408373 cri.go:89] found id: ""
	I1217 01:39:35.176501 1408373 logs.go:282] 0 containers: []
	W1217 01:39:35.176520 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:35.176527 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:35.176600 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:35.216028 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:35.216053 1408373 cri.go:89] found id: ""
	I1217 01:39:35.216069 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:35.216130 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:35.221028 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:35.221112 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:35.252324 1408373 cri.go:89] found id: ""
	I1217 01:39:35.252352 1408373 logs.go:282] 0 containers: []
	W1217 01:39:35.252370 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:35.252377 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:35.252458 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:35.295604 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:35.295629 1408373 cri.go:89] found id: ""
	I1217 01:39:35.295645 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:35.295701 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:35.300344 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:35.300434 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:35.340017 1408373 cri.go:89] found id: ""
	I1217 01:39:35.340059 1408373 logs.go:282] 0 containers: []
	W1217 01:39:35.340069 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:35.340076 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:35.340153 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:35.387957 1408373 cri.go:89] found id: ""
	I1217 01:39:35.387985 1408373 logs.go:282] 0 containers: []
	W1217 01:39:35.387994 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:35.388007 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:35.388021 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:35.428870 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:35.428906 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:35.496157 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:35.496198 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:35.530780 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:35.530812 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:35.595681 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:35.595719 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:35.611475 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:35.611514 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:35.676256 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:35.676294 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:35.725648 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:35.725684 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:35.817492 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:35.817556 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:35.817583 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:38.363925 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:38.375397 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:38.375468 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:38.411890 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:38.411913 1408373 cri.go:89] found id: ""
	I1217 01:39:38.411922 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:38.411978 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:38.417480 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:38.417558 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:38.456153 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:38.456176 1408373 cri.go:89] found id: ""
	I1217 01:39:38.456185 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:38.456245 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:38.460254 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:38.460327 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:38.494116 1408373 cri.go:89] found id: ""
	I1217 01:39:38.494149 1408373 logs.go:282] 0 containers: []
	W1217 01:39:38.494162 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:38.494168 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:38.494235 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:38.530146 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:38.530177 1408373 cri.go:89] found id: ""
	I1217 01:39:38.530186 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:38.530245 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:38.534659 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:38.534747 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:38.580325 1408373 cri.go:89] found id: ""
	I1217 01:39:38.580357 1408373 logs.go:282] 0 containers: []
	W1217 01:39:38.580367 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:38.580374 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:38.580435 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:38.632660 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:38.632682 1408373 cri.go:89] found id: ""
	I1217 01:39:38.632691 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:38.632750 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:38.641991 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:38.642069 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:38.726600 1408373 cri.go:89] found id: ""
	I1217 01:39:38.726628 1408373 logs.go:282] 0 containers: []
	W1217 01:39:38.726637 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:38.726644 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:38.726723 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:38.755388 1408373 cri.go:89] found id: ""
	I1217 01:39:38.755411 1408373 logs.go:282] 0 containers: []
	W1217 01:39:38.755420 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:38.755435 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:38.755448 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:38.809842 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:38.809879 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:38.856389 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:38.856416 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:38.900913 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:38.900954 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:38.934754 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:38.934794 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:39.004723 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:39.004768 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:39.021979 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:39.022008 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:39.089998 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:39.090020 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:39.090032 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:39.118944 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:39.118981 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:41.648695 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:41.663216 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:41.663293 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:41.699017 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:41.699043 1408373 cri.go:89] found id: ""
	I1217 01:39:41.699055 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:41.699120 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:41.703360 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:41.703433 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:41.729228 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:41.729249 1408373 cri.go:89] found id: ""
	I1217 01:39:41.729258 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:41.729324 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:41.733203 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:41.733283 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:41.760444 1408373 cri.go:89] found id: ""
	I1217 01:39:41.760466 1408373 logs.go:282] 0 containers: []
	W1217 01:39:41.760476 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:41.760482 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:41.760545 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:41.787374 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:41.787397 1408373 cri.go:89] found id: ""
	I1217 01:39:41.787406 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:41.787484 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:41.791408 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:41.791530 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:41.817732 1408373 cri.go:89] found id: ""
	I1217 01:39:41.817765 1408373 logs.go:282] 0 containers: []
	W1217 01:39:41.817775 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:41.817797 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:41.817901 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:41.847661 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:41.847686 1408373 cri.go:89] found id: ""
	I1217 01:39:41.847695 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:41.847750 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:41.851749 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:41.851824 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:41.877203 1408373 cri.go:89] found id: ""
	I1217 01:39:41.877225 1408373 logs.go:282] 0 containers: []
	W1217 01:39:41.877234 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:41.877241 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:41.877299 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:41.902745 1408373 cri.go:89] found id: ""
	I1217 01:39:41.902814 1408373 logs.go:282] 0 containers: []
	W1217 01:39:41.902831 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:41.902849 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:41.902866 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:41.964652 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:41.964674 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:41.964693 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:41.998320 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:41.998351 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:42.038883 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:42.038915 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:42.069357 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:42.069392 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:42.131306 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:42.131356 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:42.148754 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:42.148788 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:42.185781 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:42.185825 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:42.219772 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:42.219806 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:44.764008 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:44.774403 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:44.774513 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:44.799259 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:44.799292 1408373 cri.go:89] found id: ""
	I1217 01:39:44.799302 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:44.799396 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:44.803475 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:44.803608 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:44.829578 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:44.829602 1408373 cri.go:89] found id: ""
	I1217 01:39:44.829610 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:44.829751 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:44.833753 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:44.833855 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:44.860304 1408373 cri.go:89] found id: ""
	I1217 01:39:44.860330 1408373 logs.go:282] 0 containers: []
	W1217 01:39:44.860339 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:44.860345 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:44.860407 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:44.887443 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:44.887510 1408373 cri.go:89] found id: ""
	I1217 01:39:44.887531 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:44.887611 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:44.891384 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:44.891485 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:44.916201 1408373 cri.go:89] found id: ""
	I1217 01:39:44.916227 1408373 logs.go:282] 0 containers: []
	W1217 01:39:44.916236 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:44.916243 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:44.916380 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:44.941878 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:44.941901 1408373 cri.go:89] found id: ""
	I1217 01:39:44.941910 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:44.941985 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:44.945783 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:44.945865 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:44.976601 1408373 cri.go:89] found id: ""
	I1217 01:39:44.976679 1408373 logs.go:282] 0 containers: []
	W1217 01:39:44.976704 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:44.976723 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:44.976817 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:45.000956 1408373 cri.go:89] found id: ""
	I1217 01:39:45.001052 1408373 logs.go:282] 0 containers: []
	W1217 01:39:45.001074 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:45.001114 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:45.001143 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:45.070871 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:45.071267 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:45.118670 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:45.118764 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:45.247302 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:45.247329 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:45.247345 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:45.312685 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:45.312717 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:45.345459 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:45.345539 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:45.401515 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:45.401548 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:45.458732 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:45.458773 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:45.488300 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:45.488331 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:48.019344 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:48.030538 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:48.030616 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:48.057064 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:48.057085 1408373 cri.go:89] found id: ""
	I1217 01:39:48.057095 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:48.057153 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:48.061668 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:48.061751 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:48.087109 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:48.087130 1408373 cri.go:89] found id: ""
	I1217 01:39:48.087144 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:48.087227 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:48.090990 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:48.091091 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:48.116192 1408373 cri.go:89] found id: ""
	I1217 01:39:48.116216 1408373 logs.go:282] 0 containers: []
	W1217 01:39:48.116225 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:48.116252 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:48.116329 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:48.143722 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:48.143744 1408373 cri.go:89] found id: ""
	I1217 01:39:48.143753 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:48.143809 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:48.147830 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:48.147904 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:48.176116 1408373 cri.go:89] found id: ""
	I1217 01:39:48.176139 1408373 logs.go:282] 0 containers: []
	W1217 01:39:48.176148 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:48.176154 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:48.176211 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:48.204857 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:48.204879 1408373 cri.go:89] found id: ""
	I1217 01:39:48.204886 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:48.204941 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:48.208566 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:48.208634 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:48.233561 1408373 cri.go:89] found id: ""
	I1217 01:39:48.233586 1408373 logs.go:282] 0 containers: []
	W1217 01:39:48.233596 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:48.233602 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:48.233690 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:48.258829 1408373 cri.go:89] found id: ""
	I1217 01:39:48.258854 1408373 logs.go:282] 0 containers: []
	W1217 01:39:48.258864 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:48.258880 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:48.258891 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:48.285930 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:48.285959 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:48.343444 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:48.343479 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:48.381374 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:48.381402 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:48.422992 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:48.423023 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:48.438338 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:48.438366 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:48.503957 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:48.503978 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:48.503991 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:48.550322 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:48.550358 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:48.581184 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:48.581213 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:39:51.110574 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:51.121958 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:51.122033 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:51.149819 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:51.149843 1408373 cri.go:89] found id: ""
	I1217 01:39:51.149852 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:51.149913 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:51.154219 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:51.154303 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:51.180618 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:51.180640 1408373 cri.go:89] found id: ""
	I1217 01:39:51.180649 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:51.180708 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:51.184428 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:51.184502 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:51.209532 1408373 cri.go:89] found id: ""
	I1217 01:39:51.209561 1408373 logs.go:282] 0 containers: []
	W1217 01:39:51.209570 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:51.209577 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:51.209687 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:51.236784 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:51.236802 1408373 cri.go:89] found id: ""
	I1217 01:39:51.236811 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:51.236866 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:51.240627 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:51.240704 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:51.267767 1408373 cri.go:89] found id: ""
	I1217 01:39:51.267801 1408373 logs.go:282] 0 containers: []
	W1217 01:39:51.267811 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:51.267819 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:51.267906 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:51.294273 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:51.294296 1408373 cri.go:89] found id: ""
	I1217 01:39:51.294306 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:51.294367 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:51.298353 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:51.298461 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:51.326067 1408373 cri.go:89] found id: ""
	I1217 01:39:51.326089 1408373 logs.go:282] 0 containers: []
	W1217 01:39:51.326098 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:51.326104 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:51.326162 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:51.350121 1408373 cri.go:89] found id: ""
	I1217 01:39:51.350143 1408373 logs.go:282] 0 containers: []
	W1217 01:39:51.350152 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:51.350165 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:51.350176 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:51.364529 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:51.364556 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:51.457579 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:51.457673 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:51.457693 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:51.491847 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:51.491878 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:51.527107 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:51.527138 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:51.567712 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:51.567740 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:51.625999 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:51.626033 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:51.661998 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:51.662031 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:51.704455 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:51.704487 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
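
Each round above is minikube's log collector probing for control-plane containers and then tailing whatever logs it can reach, while kube-proxy, coredns, kindnet and storage-provisioner never appear. A minimal bash sketch, run inside the node (the profile name is a placeholder, not taken from this run), that reproduces the same per-component discovery:

    # Hedged sketch: mirror the collector's per-component container discovery.
    # Run inside the minikube node, e.g. after `minikube ssh -p <profile>`.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done

In this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager ever report an id, consistent with a control plane whose static pods started but never became healthy enough to schedule anything else.
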
	I1217 01:39:54.235050 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:54.247782 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:54.247856 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:54.274840 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:54.274863 1408373 cri.go:89] found id: ""
	I1217 01:39:54.274871 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:54.274927 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:54.278823 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:54.278898 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:54.305303 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:54.305325 1408373 cri.go:89] found id: ""
	I1217 01:39:54.305333 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:54.305390 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:54.309285 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:54.309357 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:54.335151 1408373 cri.go:89] found id: ""
	I1217 01:39:54.335175 1408373 logs.go:282] 0 containers: []
	W1217 01:39:54.335183 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:54.335190 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:54.335258 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:54.361867 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:54.361890 1408373 cri.go:89] found id: ""
	I1217 01:39:54.361898 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:54.361953 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:54.365583 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:54.365684 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:54.393921 1408373 cri.go:89] found id: ""
	I1217 01:39:54.393947 1408373 logs.go:282] 0 containers: []
	W1217 01:39:54.393956 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:54.393969 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:54.394029 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:54.419497 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:54.419519 1408373 cri.go:89] found id: ""
	I1217 01:39:54.419528 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:54.419613 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:54.423631 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:54.423730 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:54.449311 1408373 cri.go:89] found id: ""
	I1217 01:39:54.449335 1408373 logs.go:282] 0 containers: []
	W1217 01:39:54.449344 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:54.449351 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:54.449439 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:54.479569 1408373 cri.go:89] found id: ""
	I1217 01:39:54.479640 1408373 logs.go:282] 0 containers: []
	W1217 01:39:54.479656 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:54.479671 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:54.479690 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:54.520977 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:54.521004 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:54.580195 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:54.580229 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:54.595367 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:54.595396 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:54.661257 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:54.661279 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:54.661294 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:54.694988 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:54.695021 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:54.726473 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:54.726503 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:54.756379 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:54.756406 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:54.792399 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:54.792428 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
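
The `sudo pgrep -xnf kube-apiserver.*minikube.*` line that opens each round is the recurring host-process probe the collector repeats, roughly every three seconds, before re-listing containers. A hedged equivalent wait loop (placeholder profile; the pgrep pattern is copied from the log):

    # Hedged sketch: retry the same process check until it succeeds.
    until minikube ssh -p <profile> -- \
        "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" >/dev/null 2>&1; do
      sleep 3
    done
    echo "kube-apiserver process found"
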
	I1217 01:39:57.325769 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:39:57.337522 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:39:57.337593 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:39:57.406112 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:57.406135 1408373 cri.go:89] found id: ""
	I1217 01:39:57.406146 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:39:57.406201 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:57.410354 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:39:57.410424 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:39:57.444184 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:57.444208 1408373 cri.go:89] found id: ""
	I1217 01:39:57.444217 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:39:57.444274 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:57.447999 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:39:57.448067 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:39:57.491505 1408373 cri.go:89] found id: ""
	I1217 01:39:57.491529 1408373 logs.go:282] 0 containers: []
	W1217 01:39:57.491538 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:39:57.491545 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:39:57.491604 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:39:57.528674 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:57.528704 1408373 cri.go:89] found id: ""
	I1217 01:39:57.528713 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:39:57.528773 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:57.535021 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:39:57.535091 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:39:57.576917 1408373 cri.go:89] found id: ""
	I1217 01:39:57.576943 1408373 logs.go:282] 0 containers: []
	W1217 01:39:57.576958 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:39:57.576965 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:39:57.577029 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:39:57.605618 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:57.605659 1408373 cri.go:89] found id: ""
	I1217 01:39:57.605667 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:39:57.605737 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:39:57.610078 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:39:57.610152 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:39:57.643819 1408373 cri.go:89] found id: ""
	I1217 01:39:57.643891 1408373 logs.go:282] 0 containers: []
	W1217 01:39:57.643912 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:39:57.643930 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:39:57.644019 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:39:57.685668 1408373 cri.go:89] found id: ""
	I1217 01:39:57.685689 1408373 logs.go:282] 0 containers: []
	W1217 01:39:57.685698 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:39:57.685720 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:39:57.685733 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:39:57.771605 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:39:57.771672 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:39:57.771699 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:39:57.823330 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:39:57.823417 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:39:57.868836 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:39:57.868874 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:39:57.900064 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:39:57.900099 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:39:57.946529 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:39:57.946555 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:39:58.006765 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:39:58.006815 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:39:58.024168 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:39:58.024194 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:39:58.083939 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:39:58.083984 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:00.616228 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:00.642186 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:00.642265 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:00.704624 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:00.704645 1408373 cri.go:89] found id: ""
	I1217 01:40:00.704654 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:00.704719 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:00.714548 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:00.714632 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:00.775576 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:00.775597 1408373 cri.go:89] found id: ""
	I1217 01:40:00.775606 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:00.775664 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:00.780541 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:00.780618 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:00.817784 1408373 cri.go:89] found id: ""
	I1217 01:40:00.817812 1408373 logs.go:282] 0 containers: []
	W1217 01:40:00.817821 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:00.817828 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:00.817898 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:00.864234 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:00.864254 1408373 cri.go:89] found id: ""
	I1217 01:40:00.864263 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:00.864323 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:00.869089 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:00.869157 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:00.899942 1408373 cri.go:89] found id: ""
	I1217 01:40:00.900004 1408373 logs.go:282] 0 containers: []
	W1217 01:40:00.900028 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:00.900046 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:00.900120 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:00.935144 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:00.935202 1408373 cri.go:89] found id: ""
	I1217 01:40:00.935224 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:00.935294 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:00.940221 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:00.940330 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:00.976687 1408373 cri.go:89] found id: ""
	I1217 01:40:00.976747 1408373 logs.go:282] 0 containers: []
	W1217 01:40:00.976772 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:00.976790 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:00.976866 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:01.019056 1408373 cri.go:89] found id: ""
	I1217 01:40:01.019127 1408373 logs.go:282] 0 containers: []
	W1217 01:40:01.019152 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:01.019179 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:01.019209 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:01.064564 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:01.064642 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:01.101346 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:01.101430 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:01.174376 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:01.174462 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:01.215075 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:01.215160 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:01.253745 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:01.253807 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:01.274707 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:01.274790 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:01.368205 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:01.368246 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:01.368260 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:01.422613 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:01.422650 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:03.968126 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:03.978719 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:03.978795 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:04.005490 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:04.005513 1408373 cri.go:89] found id: ""
	I1217 01:40:04.005522 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:04.005595 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:04.010328 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:04.010414 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:04.036569 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:04.036592 1408373 cri.go:89] found id: ""
	I1217 01:40:04.036601 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:04.036659 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:04.040571 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:04.040645 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:04.066828 1408373 cri.go:89] found id: ""
	I1217 01:40:04.066853 1408373 logs.go:282] 0 containers: []
	W1217 01:40:04.066862 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:04.066869 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:04.066974 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:04.102754 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:04.102778 1408373 cri.go:89] found id: ""
	I1217 01:40:04.102787 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:04.102849 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:04.106930 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:04.107004 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:04.133953 1408373 cri.go:89] found id: ""
	I1217 01:40:04.133981 1408373 logs.go:282] 0 containers: []
	W1217 01:40:04.133990 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:04.133998 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:04.134062 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:04.159732 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:04.159756 1408373 cri.go:89] found id: ""
	I1217 01:40:04.159764 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:04.159822 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:04.163893 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:04.163968 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:04.194625 1408373 cri.go:89] found id: ""
	I1217 01:40:04.194649 1408373 logs.go:282] 0 containers: []
	W1217 01:40:04.194658 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:04.194665 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:04.194725 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:04.220950 1408373 cri.go:89] found id: ""
	I1217 01:40:04.220975 1408373 logs.go:282] 0 containers: []
	W1217 01:40:04.220984 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:04.220998 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:04.221013 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:04.259085 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:04.259119 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:04.302960 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:04.302992 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:04.369525 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:04.369564 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:04.458349 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:04.458371 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:04.458385 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:04.509318 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:04.509370 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:04.561861 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:04.561888 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:04.626714 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:04.626792 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:04.647265 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:04.647342 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:07.222168 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:07.232532 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:07.232603 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:07.260793 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:07.260815 1408373 cri.go:89] found id: ""
	I1217 01:40:07.260824 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:07.260883 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:07.264729 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:07.264804 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:07.291177 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:07.291199 1408373 cri.go:89] found id: ""
	I1217 01:40:07.291207 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:07.291262 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:07.295142 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:07.295217 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:07.320397 1408373 cri.go:89] found id: ""
	I1217 01:40:07.320418 1408373 logs.go:282] 0 containers: []
	W1217 01:40:07.320427 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:07.320433 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:07.320492 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:07.346299 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:07.346320 1408373 cri.go:89] found id: ""
	I1217 01:40:07.346329 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:07.346388 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:07.350183 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:07.350260 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:07.376471 1408373 cri.go:89] found id: ""
	I1217 01:40:07.376495 1408373 logs.go:282] 0 containers: []
	W1217 01:40:07.376503 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:07.376510 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:07.376583 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:07.402455 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:07.402477 1408373 cri.go:89] found id: ""
	I1217 01:40:07.402486 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:07.402547 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:07.406377 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:07.406449 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:07.430476 1408373 cri.go:89] found id: ""
	I1217 01:40:07.430503 1408373 logs.go:282] 0 containers: []
	W1217 01:40:07.430511 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:07.430518 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:07.430580 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:07.456208 1408373 cri.go:89] found id: ""
	I1217 01:40:07.456235 1408373 logs.go:282] 0 containers: []
	W1217 01:40:07.456245 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:07.456260 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:07.456273 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:07.471759 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:07.471787 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:07.534396 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:07.534417 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:07.534431 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:07.572147 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:07.572177 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:07.604938 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:07.604969 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:07.659695 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:07.659721 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:07.727566 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:07.727605 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:07.761759 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:07.761789 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:07.794366 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:07.794396 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:10.324559 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:10.334991 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:10.335063 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:10.361342 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:10.361363 1408373 cri.go:89] found id: ""
	I1217 01:40:10.361372 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:10.361431 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:10.365236 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:10.365305 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:10.391656 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:10.391676 1408373 cri.go:89] found id: ""
	I1217 01:40:10.391684 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:10.391741 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:10.395791 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:10.395870 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:10.420809 1408373 cri.go:89] found id: ""
	I1217 01:40:10.420832 1408373 logs.go:282] 0 containers: []
	W1217 01:40:10.420841 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:10.420847 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:10.420905 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:10.446336 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:10.446358 1408373 cri.go:89] found id: ""
	I1217 01:40:10.446367 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:10.446427 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:10.450545 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:10.450623 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:10.481143 1408373 cri.go:89] found id: ""
	I1217 01:40:10.481165 1408373 logs.go:282] 0 containers: []
	W1217 01:40:10.481174 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:10.481181 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:10.481241 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:10.507381 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:10.507409 1408373 cri.go:89] found id: ""
	I1217 01:40:10.507417 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:10.507479 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:10.511435 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:10.511511 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:10.542980 1408373 cri.go:89] found id: ""
	I1217 01:40:10.543006 1408373 logs.go:282] 0 containers: []
	W1217 01:40:10.543015 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:10.543021 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:10.543089 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:10.575292 1408373 cri.go:89] found id: ""
	I1217 01:40:10.575316 1408373 logs.go:282] 0 containers: []
	W1217 01:40:10.575326 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:10.575380 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:10.575391 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:10.605068 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:10.605107 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:10.652556 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:10.652632 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:10.694607 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:10.694641 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:10.734913 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:10.734950 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:10.771217 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:10.771248 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:10.809150 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:10.809182 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:10.874776 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:10.874813 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:10.891052 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:10.891079 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:10.960166 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
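
Every `describe nodes` attempt fails identically: the embedded kubeconfig points kubectl at localhost:8443, and nothing there accepts the connection even though a kube-apiserver container id is found. A hedged sanity check of that endpoint from inside the node (placeholder profile; `ss` and `curl` are assumed to be present in the node image):

    # Hedged sketch: verify whether anything serves the apiserver port.
    minikube ssh -p <profile> -- \
        "sudo ss -tlnp | grep -w 8443 || echo 'nothing listening on 8443'"
    minikube ssh -p <profile> -- \
        "curl -sk --max-time 5 https://localhost:8443/healthz; echo"
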
	I1217 01:40:13.460444 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:13.471424 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:13.471498 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:13.497057 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:13.497078 1408373 cri.go:89] found id: ""
	I1217 01:40:13.497092 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:13.497148 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:13.500975 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:13.501048 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:13.526656 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:13.526679 1408373 cri.go:89] found id: ""
	I1217 01:40:13.526693 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:13.526751 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:13.530658 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:13.530735 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:13.555596 1408373 cri.go:89] found id: ""
	I1217 01:40:13.555620 1408373 logs.go:282] 0 containers: []
	W1217 01:40:13.555629 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:13.555635 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:13.555698 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:13.597930 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:13.597951 1408373 cri.go:89] found id: ""
	I1217 01:40:13.597960 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:13.598017 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:13.601813 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:13.601889 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:13.631655 1408373 cri.go:89] found id: ""
	I1217 01:40:13.631677 1408373 logs.go:282] 0 containers: []
	W1217 01:40:13.631685 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:13.631692 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:13.631751 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:13.660594 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:13.660668 1408373 cri.go:89] found id: ""
	I1217 01:40:13.660693 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:13.660773 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:13.665307 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:13.665380 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:13.696866 1408373 cri.go:89] found id: ""
	I1217 01:40:13.696889 1408373 logs.go:282] 0 containers: []
	W1217 01:40:13.696898 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:13.696904 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:13.696963 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:13.722062 1408373 cri.go:89] found id: ""
	I1217 01:40:13.722091 1408373 logs.go:282] 0 containers: []
	W1217 01:40:13.722100 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:13.722114 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:13.722126 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:13.737125 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:13.737214 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:13.806823 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:13.806849 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:13.806864 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:13.842061 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:13.842093 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:13.881824 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:13.881857 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:13.924705 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:13.924737 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:13.954328 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:13.954358 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:14.017425 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:14.017464 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:14.066791 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:14.066824 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:16.598738 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:16.608879 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:16.608960 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:16.651872 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:16.651896 1408373 cri.go:89] found id: ""
	I1217 01:40:16.651905 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:16.651960 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:16.656324 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:16.656401 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:16.686435 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:16.686461 1408373 cri.go:89] found id: ""
	I1217 01:40:16.686469 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:16.686528 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:16.692497 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:16.692577 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:16.718758 1408373 cri.go:89] found id: ""
	I1217 01:40:16.718783 1408373 logs.go:282] 0 containers: []
	W1217 01:40:16.718792 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:16.718800 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:16.718868 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:16.746026 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:16.746048 1408373 cri.go:89] found id: ""
	I1217 01:40:16.746057 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:16.746114 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:16.749936 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:16.750015 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:16.780863 1408373 cri.go:89] found id: ""
	I1217 01:40:16.780897 1408373 logs.go:282] 0 containers: []
	W1217 01:40:16.780906 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:16.780912 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:16.780971 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:16.806856 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:16.806882 1408373 cri.go:89] found id: ""
	I1217 01:40:16.806890 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:16.806968 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:16.811018 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:16.811099 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:16.836282 1408373 cri.go:89] found id: ""
	I1217 01:40:16.836311 1408373 logs.go:282] 0 containers: []
	W1217 01:40:16.836320 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:16.836327 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:16.836387 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:16.862333 1408373 cri.go:89] found id: ""
	I1217 01:40:16.862356 1408373 logs.go:282] 0 containers: []
	W1217 01:40:16.862366 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:16.862378 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:16.862390 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:16.915279 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:16.915306 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:16.982716 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:16.982741 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:16.982755 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:17.027012 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:17.027055 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:17.070040 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:17.070071 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:17.104055 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:17.104086 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:17.165251 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:17.165288 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:17.180829 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:17.180859 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:17.219576 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:17.219607 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
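The block above is one full diagnostic pass: for each expected control-plane component, minikube shells out to crictl and records which container IDs exist. Below is a minimal standalone sketch of that discovery step, replayed from the logged `sudo crictl ps -a --quiet --name=<component>` commands; it assumes crictl on PATH and passwordless sudo on the node, and it is not minikube's own cri.go code.

    // Replays the container-discovery step from the log: for each
    // control-plane component, ask crictl for matching container IDs.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, name := range components {
    		// --quiet prints one container ID per line, or nothing at all.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }

In the run above, only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager return IDs, which matches the `No container was found matching` warnings for coredns, kube-proxy, kindnet, and storage-provisioner.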
	I1217 01:40:19.751200 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:19.761969 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:19.762047 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:19.792180 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:19.792205 1408373 cri.go:89] found id: ""
	I1217 01:40:19.792213 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:19.792271 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:19.796137 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:19.796211 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:19.822135 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:19.822155 1408373 cri.go:89] found id: ""
	I1217 01:40:19.822164 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:19.822223 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:19.826078 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:19.826148 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:19.852001 1408373 cri.go:89] found id: ""
	I1217 01:40:19.852079 1408373 logs.go:282] 0 containers: []
	W1217 01:40:19.852103 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:19.852122 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:19.852220 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:19.880014 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:19.880036 1408373 cri.go:89] found id: ""
	I1217 01:40:19.880044 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:19.880105 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:19.884146 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:19.884272 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:19.912778 1408373 cri.go:89] found id: ""
	I1217 01:40:19.912805 1408373 logs.go:282] 0 containers: []
	W1217 01:40:19.912814 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:19.912821 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:19.912884 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:19.947076 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:19.947097 1408373 cri.go:89] found id: ""
	I1217 01:40:19.947105 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:19.947166 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:19.951376 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:19.951452 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:19.977584 1408373 cri.go:89] found id: ""
	I1217 01:40:19.977614 1408373 logs.go:282] 0 containers: []
	W1217 01:40:19.977626 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:19.977635 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:19.977792 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:20.013260 1408373 cri.go:89] found id: ""
	I1217 01:40:20.013286 1408373 logs.go:282] 0 containers: []
	W1217 01:40:20.013295 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:20.013310 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:20.013322 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:20.052437 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:20.052474 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:20.085986 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:20.086015 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:20.116255 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:20.116290 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:20.177173 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:20.177209 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:20.193802 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:20.193878 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:20.228548 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:20.228581 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:20.258770 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:20.258801 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:20.326888 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:20.326951 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:20.326971 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:22.860222 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:22.871477 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:22.871580 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:22.901251 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:22.901274 1408373 cri.go:89] found id: ""
	I1217 01:40:22.901282 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:22.901352 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:22.905683 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:22.905766 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:22.932731 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:22.932764 1408373 cri.go:89] found id: ""
	I1217 01:40:22.932779 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:22.932860 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:22.936916 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:22.936989 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:22.964611 1408373 cri.go:89] found id: ""
	I1217 01:40:22.964648 1408373 logs.go:282] 0 containers: []
	W1217 01:40:22.964663 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:22.964674 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:22.964761 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:23.002491 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:23.002517 1408373 cri.go:89] found id: ""
	I1217 01:40:23.002530 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:23.002595 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:23.007908 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:23.007984 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:23.036348 1408373 cri.go:89] found id: ""
	I1217 01:40:23.036380 1408373 logs.go:282] 0 containers: []
	W1217 01:40:23.036389 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:23.036395 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:23.036454 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:23.064703 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:23.064729 1408373 cri.go:89] found id: ""
	I1217 01:40:23.064739 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:23.064802 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:23.068739 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:23.068818 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:23.101739 1408373 cri.go:89] found id: ""
	I1217 01:40:23.101765 1408373 logs.go:282] 0 containers: []
	W1217 01:40:23.101776 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:23.101782 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:23.101843 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:23.127976 1408373 cri.go:89] found id: ""
	I1217 01:40:23.128007 1408373 logs.go:282] 0 containers: []
	W1217 01:40:23.128016 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:23.128039 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:23.128051 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:23.156944 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:23.156981 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:23.215612 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:23.215649 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:23.257867 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:23.257898 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:23.294565 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:23.294601 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:23.309133 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:23.309163 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:23.383510 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:23.383534 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:23.383548 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:23.440951 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:23.440992 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:23.479803 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:23.479838 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:26.010446 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:26.028409 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:26.028489 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:26.067226 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:26.067252 1408373 cri.go:89] found id: ""
	I1217 01:40:26.067261 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:26.067323 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:26.071395 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:26.071470 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:26.098753 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:26.098773 1408373 cri.go:89] found id: ""
	I1217 01:40:26.098781 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:26.098838 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:26.102917 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:26.102999 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:26.130009 1408373 cri.go:89] found id: ""
	I1217 01:40:26.130032 1408373 logs.go:282] 0 containers: []
	W1217 01:40:26.130040 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:26.130047 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:26.130111 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:26.159250 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:26.159273 1408373 cri.go:89] found id: ""
	I1217 01:40:26.159281 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:26.159368 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:26.163349 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:26.163454 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:26.189159 1408373 cri.go:89] found id: ""
	I1217 01:40:26.189183 1408373 logs.go:282] 0 containers: []
	W1217 01:40:26.189192 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:26.189198 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:26.189255 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:26.214180 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:26.214199 1408373 cri.go:89] found id: ""
	I1217 01:40:26.214214 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:26.214271 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:26.218343 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:26.218427 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:26.247121 1408373 cri.go:89] found id: ""
	I1217 01:40:26.247147 1408373 logs.go:282] 0 containers: []
	W1217 01:40:26.247157 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:26.247163 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:26.247223 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:26.272258 1408373 cri.go:89] found id: ""
	I1217 01:40:26.272287 1408373 logs.go:282] 0 containers: []
	W1217 01:40:26.272296 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:26.272313 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:26.272355 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:26.330539 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:26.330574 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:26.345283 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:26.345309 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:26.389451 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:26.389484 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:26.421056 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:26.421087 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:26.458438 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:26.458477 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:26.522346 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:26.522368 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:26.522383 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:26.556632 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:26.556665 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:26.591707 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:26.591740 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
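After discovery, each pass gathers the same fixed set of logs: journalctl for the kubelet and containerd units, dmesg, `crictl logs --tail 400` for every container ID found, plus the kubectl describe-nodes attempt. A hedged sketch replaying the journalctl and crictl commands exactly as logged follows; the container ID is passed in as an argument, the crictl path and sudo behavior are assumptions about the node, and the piped dmesg step is omitted since it needs a shell.

    // Replays the "Gathering logs for ..." commands from the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run executes a command and streams its combined output to stdout,
    // reporting (but not aborting on) failures, as the gatherer above does.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
    	}
    	os.Stdout.Write(out)
    }

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: gather <container-id>")
    		os.Exit(1)
    	}
    	run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	run("sudo", "journalctl", "-u", "containerd", "-n", "400")
    	run("sudo", "/usr/local/bin/crictl", "logs", "--tail", "400", os.Args[1])
    }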
	I1217 01:40:29.121217 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:29.132724 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:29.132800 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:29.167670 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:29.167691 1408373 cri.go:89] found id: ""
	I1217 01:40:29.167700 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:29.167756 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:29.172716 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:29.172786 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:29.209828 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:29.209855 1408373 cri.go:89] found id: ""
	I1217 01:40:29.209864 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:29.209919 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:29.214607 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:29.214682 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:29.244133 1408373 cri.go:89] found id: ""
	I1217 01:40:29.244166 1408373 logs.go:282] 0 containers: []
	W1217 01:40:29.244175 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:29.244182 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:29.244246 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:29.279007 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:29.279029 1408373 cri.go:89] found id: ""
	I1217 01:40:29.279037 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:29.279093 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:29.283228 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:29.283301 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:29.319007 1408373 cri.go:89] found id: ""
	I1217 01:40:29.319034 1408373 logs.go:282] 0 containers: []
	W1217 01:40:29.319043 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:29.319049 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:29.319109 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:29.350755 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:29.350775 1408373 cri.go:89] found id: ""
	I1217 01:40:29.350783 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:29.350841 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:29.355272 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:29.355342 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:29.417239 1408373 cri.go:89] found id: ""
	I1217 01:40:29.417262 1408373 logs.go:282] 0 containers: []
	W1217 01:40:29.417271 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:29.417277 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:29.417337 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:29.466093 1408373 cri.go:89] found id: ""
	I1217 01:40:29.466124 1408373 logs.go:282] 0 containers: []
	W1217 01:40:29.466141 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:29.466160 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:29.466177 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:29.525544 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:29.525577 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:29.558313 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:29.558353 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:29.590589 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:29.590619 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:29.652890 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:29.652926 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:29.668046 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:29.668073 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:29.702081 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:29.702112 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:29.732793 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:29.732822 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:29.803488 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:29.803509 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:29.803523 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:32.357817 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:32.385090 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:32.385157 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:32.462416 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:32.462433 1408373 cri.go:89] found id: ""
	I1217 01:40:32.462441 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:32.462496 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:32.466879 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:32.466947 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:32.499829 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:32.499848 1408373 cri.go:89] found id: ""
	I1217 01:40:32.499856 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:32.499914 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:32.504058 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:32.504125 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:32.531963 1408373 cri.go:89] found id: ""
	I1217 01:40:32.531985 1408373 logs.go:282] 0 containers: []
	W1217 01:40:32.531994 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:32.532000 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:32.532057 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:32.559787 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:32.559806 1408373 cri.go:89] found id: ""
	I1217 01:40:32.559814 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:32.559867 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:32.564325 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:32.564396 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:32.591951 1408373 cri.go:89] found id: ""
	I1217 01:40:32.591972 1408373 logs.go:282] 0 containers: []
	W1217 01:40:32.591981 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:32.591987 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:32.592051 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:32.639813 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:32.639832 1408373 cri.go:89] found id: ""
	I1217 01:40:32.639840 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:32.639893 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:32.644068 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:32.644213 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:32.677224 1408373 cri.go:89] found id: ""
	I1217 01:40:32.677246 1408373 logs.go:282] 0 containers: []
	W1217 01:40:32.677256 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:32.677262 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:32.677320 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:32.708433 1408373 cri.go:89] found id: ""
	I1217 01:40:32.708456 1408373 logs.go:282] 0 containers: []
	W1217 01:40:32.708465 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:32.708478 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:32.708489 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:32.748897 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:32.748964 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:32.765453 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:32.765521 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:32.806166 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:32.806198 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:32.850714 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:32.850744 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:32.908686 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:32.908720 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:32.948113 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:32.948154 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:32.979889 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:32.979928 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:33.057482 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:33.057592 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:33.156917 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
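Every pass ends the same way: a kube-apiserver container ID is found, yet kubectl's connection to localhost:8443 is refused. That is, the container exists but nothing is accepting connections on the secure port. A minimal probe that distinguishes "refused" from a TLS or auth failure, assuming it runs on the node itself:

    // Probes the apiserver secure port that `kubectl describe nodes`
    // fails to reach in the log above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" here means no listener at all on 8443.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }

A refused TCP dial, as in these logs, means the process is not listening at all; a successful dial followed by a TLS handshake error would instead point toward certificates or auth.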
	I1217 01:40:35.657191 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:35.667568 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:35.667648 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:35.693394 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:35.693414 1408373 cri.go:89] found id: ""
	I1217 01:40:35.693422 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:35.693477 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:35.697240 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:35.697305 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:35.727498 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:35.727517 1408373 cri.go:89] found id: ""
	I1217 01:40:35.727526 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:35.727580 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:35.731667 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:35.731736 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:35.769543 1408373 cri.go:89] found id: ""
	I1217 01:40:35.769617 1408373 logs.go:282] 0 containers: []
	W1217 01:40:35.769663 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:35.769689 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:35.769786 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:35.799687 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:35.799706 1408373 cri.go:89] found id: ""
	I1217 01:40:35.799715 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:35.799769 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:35.804067 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:35.804142 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:35.837393 1408373 cri.go:89] found id: ""
	I1217 01:40:35.837415 1408373 logs.go:282] 0 containers: []
	W1217 01:40:35.837423 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:35.837429 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:35.837490 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:35.873154 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:35.873173 1408373 cri.go:89] found id: ""
	I1217 01:40:35.873181 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:35.873241 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:35.877866 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:35.877997 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:35.916271 1408373 cri.go:89] found id: ""
	I1217 01:40:35.916341 1408373 logs.go:282] 0 containers: []
	W1217 01:40:35.916365 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:35.916383 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:35.916471 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:35.958260 1408373 cri.go:89] found id: ""
	I1217 01:40:35.958353 1408373 logs.go:282] 0 containers: []
	W1217 01:40:35.958384 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:35.958429 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:35.958460 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:36.034671 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:36.034702 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:36.141285 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:36.141308 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:36.141322 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:36.201375 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:36.201407 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:36.257542 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:36.257572 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:36.281115 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:36.281146 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:36.331728 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:36.331761 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:36.363903 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:36.363933 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:36.396655 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:36.401793 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
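The "container status" step uses a fallback chain: resolve crictl with `which` (falling back to the bare name), and if the crictl listing fails for any reason, fall back to `docker ps -a`. Here is a sketch of the same precedence written as ordinary Go instead of the logged bash one-liner; the command names are assumed to be on PATH.

    // Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	crictl := "crictl" // bare name, like `echo crictl` in the fallback
    	if path, err := exec.Command("which", "crictl").Output(); err == nil {
    		crictl = strings.TrimSpace(string(path))
    	}
    	out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl failed entirely; try the Docker CLI instead.
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    		if err != nil {
    			fmt.Println("neither crictl nor docker could list containers:", err)
    			return
    		}
    	}
    	fmt.Print(string(out))
    }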
	I1217 01:40:38.965248 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:38.977299 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:38.977372 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:39.018928 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:39.018948 1408373 cri.go:89] found id: ""
	I1217 01:40:39.018957 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:39.019015 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:39.023103 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:39.023179 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:39.048842 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:39.048860 1408373 cri.go:89] found id: ""
	I1217 01:40:39.048868 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:39.048928 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:39.052908 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:39.052982 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:39.079606 1408373 cri.go:89] found id: ""
	I1217 01:40:39.079628 1408373 logs.go:282] 0 containers: []
	W1217 01:40:39.079637 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:39.079643 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:39.079703 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:39.105876 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:39.105897 1408373 cri.go:89] found id: ""
	I1217 01:40:39.105905 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:39.105961 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:39.109739 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:39.109809 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:39.139737 1408373 cri.go:89] found id: ""
	I1217 01:40:39.139758 1408373 logs.go:282] 0 containers: []
	W1217 01:40:39.139766 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:39.139773 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:39.139831 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:39.175146 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:39.175220 1408373 cri.go:89] found id: ""
	I1217 01:40:39.175243 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:39.175325 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:39.179898 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:39.179967 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:39.207813 1408373 cri.go:89] found id: ""
	I1217 01:40:39.207880 1408373 logs.go:282] 0 containers: []
	W1217 01:40:39.207905 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:39.207918 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:39.207990 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:39.236478 1408373 cri.go:89] found id: ""
	I1217 01:40:39.236506 1408373 logs.go:282] 0 containers: []
	W1217 01:40:39.236515 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:39.236530 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:39.236542 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:39.265179 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:39.265206 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:39.322153 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:39.322194 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:39.337492 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:39.337523 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:39.372239 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:39.372274 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:39.404991 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:39.405022 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:39.441759 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:39.441837 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:39.494213 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:39.494248 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:39.596292 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:39.596314 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:39.596330 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
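Each retry, including the one that follows, opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires an exact pattern match, and -n keeps only the newest match; pgrep exits non-zero when nothing matches. An illustrative standalone equivalent (an assumption about intent — this only shows the probe, not how minikube acts on its result):

    // Checks whether a kube-apiserver process with "minikube" in its
    // command line is currently running, as each cycle above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits 1 when no process matches.
    		fmt.Println("kube-apiserver process not found:", err)
    		return
    	}
    	fmt.Printf("kube-apiserver PID: %s", out)
    }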
	I1217 01:40:42.135977 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:42.148409 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:42.148494 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:42.191465 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:42.191491 1408373 cri.go:89] found id: ""
	I1217 01:40:42.191500 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:42.191562 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:42.196771 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:42.196860 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:42.228992 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:42.229016 1408373 cri.go:89] found id: ""
	I1217 01:40:42.229025 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:42.229091 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:42.234419 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:42.234502 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:42.267299 1408373 cri.go:89] found id: ""
	I1217 01:40:42.267327 1408373 logs.go:282] 0 containers: []
	W1217 01:40:42.267336 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:42.267343 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:42.267437 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:42.296847 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:42.296872 1408373 cri.go:89] found id: ""
	I1217 01:40:42.296881 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:42.296943 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:42.301129 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:42.301209 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:42.328304 1408373 cri.go:89] found id: ""
	I1217 01:40:42.328331 1408373 logs.go:282] 0 containers: []
	W1217 01:40:42.328341 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:42.328347 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:42.328409 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:42.359141 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:42.359163 1408373 cri.go:89] found id: ""
	I1217 01:40:42.359178 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:42.359252 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:42.363420 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:42.363538 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:42.390125 1408373 cri.go:89] found id: ""
	I1217 01:40:42.390150 1408373 logs.go:282] 0 containers: []
	W1217 01:40:42.390158 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:42.390164 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:42.390225 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:42.419130 1408373 cri.go:89] found id: ""
	I1217 01:40:42.419155 1408373 logs.go:282] 0 containers: []
	W1217 01:40:42.419164 1408373 logs.go:284] No container was found matching "storage-provisioner"
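
Each gathering cycle starts with this discovery pass: one 'crictl ps -a --quiet --name=...' query per control-plane component, recording which containers exist at all. The same pass condensed into a loop (a sketch, assuming crictl is on PATH; the names mirror the queries above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # an empty result corresponds to the 'No container was found matching' warnings
      echo "$name: ${ids:-<none>}"
    done
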
	I1217 01:40:42.419177 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:42.419217 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:42.456099 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:42.456153 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:42.471179 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:42.471208 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:42.507350 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:42.507383 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:42.542098 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:42.542130 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:42.578960 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:42.578995 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:42.607900 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:42.607936 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
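
The "container status" command is a shell fallback chain: the backquoted 'which crictl || echo crictl' expands to the resolved crictl path (or the bare name, which then fails to run), and '|| sudo docker ps -a' catches any failure by asking the Docker CLI instead. A roughly equivalent, more explicit form:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # CRI runtimes such as containerd
    else
      sudo docker ps -a     # fall back to the Docker CLI
    fi
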
	I1217 01:40:42.636392 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:42.636420 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:42.693660 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:42.693695 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:42.766612 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:45.267219 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:45.286103 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:45.286187 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:45.358871 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:45.358894 1408373 cri.go:89] found id: ""
	I1217 01:40:45.358902 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:45.358964 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:45.365609 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:45.365692 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:45.404162 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:45.404185 1408373 cri.go:89] found id: ""
	I1217 01:40:45.404193 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:45.404251 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:45.408214 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:45.408288 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:45.485406 1408373 cri.go:89] found id: ""
	I1217 01:40:45.485431 1408373 logs.go:282] 0 containers: []
	W1217 01:40:45.485440 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:45.485446 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:45.485508 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:45.528564 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:45.528593 1408373 cri.go:89] found id: ""
	I1217 01:40:45.528602 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:45.528686 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:45.538179 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:45.538287 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:45.582650 1408373 cri.go:89] found id: ""
	I1217 01:40:45.582675 1408373 logs.go:282] 0 containers: []
	W1217 01:40:45.582690 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:45.582720 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:45.582799 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:45.626758 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:45.626780 1408373 cri.go:89] found id: ""
	I1217 01:40:45.626788 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:45.626877 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:45.630816 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:45.630934 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:45.680722 1408373 cri.go:89] found id: ""
	I1217 01:40:45.680758 1408373 logs.go:282] 0 containers: []
	W1217 01:40:45.680767 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:45.680773 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:45.680870 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:45.723067 1408373 cri.go:89] found id: ""
	I1217 01:40:45.723104 1408373 logs.go:282] 0 containers: []
	W1217 01:40:45.723113 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:45.723142 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:45.723168 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:45.882364 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:45.882433 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:45.882461 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:45.969104 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:45.969183 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:46.059450 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:46.059535 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:46.122045 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:46.122116 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:46.173395 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:46.173471 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:46.205427 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:46.205459 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:46.244121 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:46.244147 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:46.306790 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:46.306826 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:48.822075 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:48.832216 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:48.832305 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:48.856272 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:48.856293 1408373 cri.go:89] found id: ""
	I1217 01:40:48.856302 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:48.856383 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:48.860175 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:48.860268 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:48.888570 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:48.888591 1408373 cri.go:89] found id: ""
	I1217 01:40:48.888600 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:48.888665 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:48.892428 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:48.892500 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:48.920285 1408373 cri.go:89] found id: ""
	I1217 01:40:48.920307 1408373 logs.go:282] 0 containers: []
	W1217 01:40:48.920315 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:48.920326 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:48.920385 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:48.945830 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:48.945860 1408373 cri.go:89] found id: ""
	I1217 01:40:48.945869 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:48.945960 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:48.949882 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:48.949958 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:48.975907 1408373 cri.go:89] found id: ""
	I1217 01:40:48.975930 1408373 logs.go:282] 0 containers: []
	W1217 01:40:48.975939 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:48.975947 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:48.976006 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:49.012739 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:49.012761 1408373 cri.go:89] found id: ""
	I1217 01:40:49.012769 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:49.012825 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:49.016706 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:49.016777 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:49.045833 1408373 cri.go:89] found id: ""
	I1217 01:40:49.045859 1408373 logs.go:282] 0 containers: []
	W1217 01:40:49.045869 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:49.045875 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:49.045935 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:49.071833 1408373 cri.go:89] found id: ""
	I1217 01:40:49.071857 1408373 logs.go:282] 0 containers: []
	W1217 01:40:49.071866 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:49.071882 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:49.071924 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:49.105888 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:49.105918 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:49.162051 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:49.162087 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:49.230165 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:49.230199 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:49.245996 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:49.246025 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:49.313475 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:49.313494 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:49.313507 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:49.345464 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:49.345495 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:49.383454 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:49.383485 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:49.412370 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:49.412404 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:51.942034 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:51.952511 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:51.952589 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:51.994761 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:51.994784 1408373 cri.go:89] found id: ""
	I1217 01:40:51.994793 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:51.994850 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:52.005447 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:52.005529 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:52.039063 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:52.039088 1408373 cri.go:89] found id: ""
	I1217 01:40:52.039097 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:52.039182 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:52.043236 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:52.043311 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:52.069082 1408373 cri.go:89] found id: ""
	I1217 01:40:52.069110 1408373 logs.go:282] 0 containers: []
	W1217 01:40:52.069120 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:52.069127 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:52.069189 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:52.104713 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:52.104738 1408373 cri.go:89] found id: ""
	I1217 01:40:52.104747 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:52.104824 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:52.108893 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:52.108997 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:52.140036 1408373 cri.go:89] found id: ""
	I1217 01:40:52.140060 1408373 logs.go:282] 0 containers: []
	W1217 01:40:52.140069 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:52.140107 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:52.140188 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:52.182684 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:52.182707 1408373 cri.go:89] found id: ""
	I1217 01:40:52.182715 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:52.182800 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:52.186883 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:52.186984 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:52.217074 1408373 cri.go:89] found id: ""
	I1217 01:40:52.217105 1408373 logs.go:282] 0 containers: []
	W1217 01:40:52.217114 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:52.217120 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:52.217211 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:52.243115 1408373 cri.go:89] found id: ""
	I1217 01:40:52.243138 1408373 logs.go:282] 0 containers: []
	W1217 01:40:52.243147 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:52.243183 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:52.243207 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:52.310149 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:52.310170 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:52.310189 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:52.344182 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:52.344211 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:52.359281 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:52.359307 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:52.395451 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:52.395483 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:52.429022 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:52.429054 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:52.459242 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:52.459272 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:52.489281 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:52.489324 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:52.518861 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:52.518889 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:55.078492 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:55.091289 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:55.091372 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:55.130752 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:55.130775 1408373 cri.go:89] found id: ""
	I1217 01:40:55.130784 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:55.130849 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:55.136165 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:55.136245 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:55.170729 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:55.170807 1408373 cri.go:89] found id: ""
	I1217 01:40:55.170835 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:55.170923 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:55.176108 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:55.176181 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:55.211595 1408373 cri.go:89] found id: ""
	I1217 01:40:55.211620 1408373 logs.go:282] 0 containers: []
	W1217 01:40:55.211629 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:55.211635 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:55.211693 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:55.240445 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:55.240473 1408373 cri.go:89] found id: ""
	I1217 01:40:55.240482 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:55.240540 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:55.244382 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:55.244461 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:55.269789 1408373 cri.go:89] found id: ""
	I1217 01:40:55.269863 1408373 logs.go:282] 0 containers: []
	W1217 01:40:55.269880 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:55.269887 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:55.269951 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:55.296690 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:55.296713 1408373 cri.go:89] found id: ""
	I1217 01:40:55.296721 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:55.296810 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:55.300665 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:55.300762 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:55.324027 1408373 cri.go:89] found id: ""
	I1217 01:40:55.324052 1408373 logs.go:282] 0 containers: []
	W1217 01:40:55.324061 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:55.324067 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:55.324158 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:55.352729 1408373 cri.go:89] found id: ""
	I1217 01:40:55.352755 1408373 logs.go:282] 0 containers: []
	W1217 01:40:55.352764 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:55.352777 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:55.352790 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:55.382546 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:55.382646 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:55.440949 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:55.440984 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:55.456141 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:55.456169 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:55.520375 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:40:55.520398 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:55.520418 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:55.550735 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:55.550768 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:55.578767 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:55.578796 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:55.622436 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:55.622467 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:55.654341 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:55.654373 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:58.189785 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:40:58.200030 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:40:58.200100 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:40:58.224889 1408373 cri.go:89] found id: "5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:58.224908 1408373 cri.go:89] found id: ""
	I1217 01:40:58.224916 1408373 logs.go:282] 1 containers: [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5]
	I1217 01:40:58.224973 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:58.228799 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:40:58.228903 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:40:58.258991 1408373 cri.go:89] found id: "aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:58.259013 1408373 cri.go:89] found id: ""
	I1217 01:40:58.259021 1408373 logs.go:282] 1 containers: [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6]
	I1217 01:40:58.259078 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:58.262980 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:40:58.263054 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:40:58.288608 1408373 cri.go:89] found id: ""
	I1217 01:40:58.288631 1408373 logs.go:282] 0 containers: []
	W1217 01:40:58.288646 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:40:58.288652 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:40:58.288711 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:40:58.314254 1408373 cri.go:89] found id: "f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:58.314278 1408373 cri.go:89] found id: ""
	I1217 01:40:58.314286 1408373 logs.go:282] 1 containers: [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068]
	I1217 01:40:58.314364 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:58.318258 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:40:58.318334 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:40:58.344106 1408373 cri.go:89] found id: ""
	I1217 01:40:58.344135 1408373 logs.go:282] 0 containers: []
	W1217 01:40:58.344144 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:40:58.344151 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:40:58.344212 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:40:58.370634 1408373 cri.go:89] found id: "f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:58.370659 1408373 cri.go:89] found id: ""
	I1217 01:40:58.370668 1408373 logs.go:282] 1 containers: [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd]
	I1217 01:40:58.370734 1408373 ssh_runner.go:195] Run: which crictl
	I1217 01:40:58.374763 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:40:58.374836 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:40:58.409724 1408373 cri.go:89] found id: ""
	I1217 01:40:58.409774 1408373 logs.go:282] 0 containers: []
	W1217 01:40:58.409784 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:40:58.409790 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:40:58.409857 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:40:58.434983 1408373 cri.go:89] found id: ""
	I1217 01:40:58.435007 1408373 logs.go:282] 0 containers: []
	W1217 01:40:58.435015 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:40:58.435030 1408373 logs.go:123] Gathering logs for kube-apiserver [5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5] ...
	I1217 01:40:58.435068 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5"
	I1217 01:40:58.477277 1408373 logs.go:123] Gathering logs for etcd [aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6] ...
	I1217 01:40:58.477306 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6"
	I1217 01:40:58.515532 1408373 logs.go:123] Gathering logs for kube-controller-manager [f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd] ...
	I1217 01:40:58.515564 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd"
	I1217 01:40:58.545235 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:40:58.545267 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:40:58.574350 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:40:58.574381 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:40:58.632518 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:40:58.632554 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:40:58.647374 1408373 logs.go:123] Gathering logs for kube-scheduler [f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068] ...
	I1217 01:40:58.647404 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068"
	I1217 01:40:58.680356 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:40:58.680384 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:40:58.712327 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:40:58.712360 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:40:58.784240 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:41:01.284526 1408373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:01.295323 1408373 kubeadm.go:602] duration metric: took 4m2.98372819s to restartPrimaryControlPlane
	W1217 01:41:01.295395 1408373 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
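
The cycles above were one wait loop: minikube re-checked for a running apiserver process every few seconds, re-gathering logs between checks, until the roughly 4-minute budget ran out. Reduced to a shell loop (a sketch; minikube drives this from Go with a proper timeout):

    # the probe repeated throughout the log; -f matches the full command line,
    # -x requires an exact pattern match, -n picks the newest matching process
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done
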
	I1217 01:41:01.295463 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:41:01.766808 1408373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:41:01.780273 1408373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:41:01.788235 1408373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:41:01.788320 1408373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:41:01.796309 1408373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:41:01.796331 1408373 kubeadm.go:158] found existing configuration files:
	
	I1217 01:41:01.796382 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:41:01.804509 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:41:01.804604 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:41:01.812496 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:41:01.820504 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:41:01.820618 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:41:01.828704 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:41:01.836565 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:41:01.836676 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:41:01.844297 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:41:01.852785 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:41:01.852854 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
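
Before retrying init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it; here every grep exits with status 2 because 'kubeadm reset' already removed the files, so each rm is a no-op. The same cleanup as a loop (endpoint and file names copied from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent (1) or the file is missing (2)
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
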
	I1217 01:41:01.860890 1408373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:41:01.902969 1408373 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:41:01.903273 1408373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:41:01.987487 1408373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:41:01.987563 1408373 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:41:01.987600 1408373 kubeadm.go:319] OS: Linux
	I1217 01:41:01.987661 1408373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:41:01.987710 1408373 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:41:01.987759 1408373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:41:01.987808 1408373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:41:01.987863 1408373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:41:01.987912 1408373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:41:01.987958 1408373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:41:01.988006 1408373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:41:01.988052 1408373 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:41:02.082312 1408373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:41:02.082419 1408373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:41:02.082508 1408373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:41:12.234898 1408373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:41:12.238016 1408373 out.go:252]   - Generating certificates and keys ...
	I1217 01:41:12.238131 1408373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:41:12.238203 1408373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:41:12.238311 1408373 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:41:12.238384 1408373 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:41:12.238453 1408373 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:41:12.238505 1408373 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:41:12.238588 1408373 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:41:12.238663 1408373 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:41:12.238743 1408373 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:41:12.239006 1408373 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:41:12.239343 1408373 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:41:12.239407 1408373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:41:12.352709 1408373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:41:12.510367 1408373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:41:13.030076 1408373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:41:13.252576 1408373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:41:13.463631 1408373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:41:13.464472 1408373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:41:13.467214 1408373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:41:13.472883 1408373 out.go:252]   - Booting up control plane ...
	I1217 01:41:13.472996 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:41:13.473106 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:41:13.473200 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:41:13.496111 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:41:13.496218 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:41:13.507506 1408373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:41:13.508468 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:41:13.508522 1408373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:41:13.694836 1408373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:41:13.694951 1408373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:45:13.695309 1408373 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000840976s
	I1217 01:45:13.695348 1408373 kubeadm.go:319] 
	I1217 01:45:13.695407 1408373 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:45:13.695443 1408373 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:45:13.695561 1408373 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:45:13.695584 1408373 kubeadm.go:319] 
	I1217 01:45:13.695706 1408373 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:45:13.695743 1408373 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:45:13.695777 1408373 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:45:13.695785 1408373 kubeadm.go:319] 
	I1217 01:45:13.700486 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:45:13.700928 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:45:13.701051 1408373 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:45:13.701312 1408373 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:45:13.701325 1408373 kubeadm.go:319] 
	I1217 01:45:13.701426 1408373 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
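
The init attempt dies in wait-control-plane: kubeadm polled the kubelet's local health endpoint for the full 4m0s window and never got an answer, meaning the kubelet itself never came up (the static pod manifests were written, but nothing launched them). The checks kubeadm suggests, plus the exact probe it performs:

    systemctl status kubelet                    # is the unit active, and how did it exit?
    journalctl -xeu kubelet | tail -n 100       # the kubelet's own view of the failure
    curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm polls; prints 'ok' when healthy
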
	W1217 01:45:13.701501 1408373 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000840976s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:45:13.701582 1408373 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:45:14.110630 1408373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:45:14.125935 1408373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:45:14.126002 1408373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:45:14.135839 1408373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:45:14.135860 1408373 kubeadm.go:158] found existing configuration files:
	
	I1217 01:45:14.135913 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:45:14.145575 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:45:14.145635 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:45:14.154713 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:45:14.175019 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:45:14.175086 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:45:14.186828 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:45:14.195778 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:45:14.195842 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:45:14.204365 1408373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:45:14.213955 1408373 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:45:14.214017 1408373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:45:14.224673 1408373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:45:14.275995 1408373 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:45:14.276219 1408373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:45:14.369220 1408373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:45:14.369288 1408373 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:45:14.369322 1408373 kubeadm.go:319] OS: Linux
	I1217 01:45:14.369365 1408373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:45:14.369410 1408373 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:45:14.369455 1408373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:45:14.369500 1408373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:45:14.369545 1408373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:45:14.369595 1408373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:45:14.369638 1408373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:45:14.369703 1408373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:45:14.369747 1408373 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:45:14.477173 1408373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:45:14.477284 1408373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:45:14.477376 1408373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:45:14.484723 1408373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:45:14.489982 1408373 out.go:252]   - Generating certificates and keys ...
	I1217 01:45:14.490077 1408373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:45:14.490142 1408373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:45:14.490227 1408373 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:45:14.490287 1408373 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:45:14.490353 1408373 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:45:14.490404 1408373 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:45:14.490463 1408373 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:45:14.490521 1408373 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:45:14.490592 1408373 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:45:14.490660 1408373 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:45:14.490696 1408373 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:45:14.490749 1408373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:45:15.257183 1408373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:45:15.992050 1408373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:45:16.343794 1408373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:45:16.606428 1408373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:45:16.807938 1408373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:45:16.808488 1408373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:45:16.812140 1408373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:45:16.815378 1408373 out.go:252]   - Booting up control plane ...
	I1217 01:45:16.815496 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:45:16.815582 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:45:16.815686 1408373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:45:16.838105 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:45:16.838457 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:45:16.845972 1408373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:45:16.846824 1408373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:45:16.847065 1408373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:45:17.009876 1408373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:45:17.009993 1408373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:49:17.009625 1408373 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000128666s
	I1217 01:49:17.009695 1408373 kubeadm.go:319] 
	I1217 01:49:17.009752 1408373 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:49:17.009787 1408373 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:49:17.009894 1408373 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:49:17.009909 1408373 kubeadm.go:319] 
	I1217 01:49:17.010020 1408373 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:49:17.010088 1408373 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:49:17.010135 1408373 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:49:17.010141 1408373 kubeadm.go:319] 
	I1217 01:49:17.014472 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:49:17.014874 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:49:17.014978 1408373 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:49:17.015238 1408373 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 01:49:17.015245 1408373 kubeadm.go:319] 
	I1217 01:49:17.015319 1408373 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:49:17.015421 1408373 kubeadm.go:403] duration metric: took 12m18.764483931s to StartCluster
	I1217 01:49:17.015466 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:49:17.015526 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:49:17.048222 1408373 cri.go:89] found id: ""
	I1217 01:49:17.048243 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.048252 1408373 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:49:17.048258 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:49:17.048331 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:49:17.077952 1408373 cri.go:89] found id: ""
	I1217 01:49:17.077974 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.077987 1408373 logs.go:284] No container was found matching "etcd"
	I1217 01:49:17.077994 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:49:17.078057 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:49:17.144981 1408373 cri.go:89] found id: ""
	I1217 01:49:17.145003 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.145012 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:49:17.145018 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:49:17.145078 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:49:17.212950 1408373 cri.go:89] found id: ""
	I1217 01:49:17.212971 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.212980 1408373 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:49:17.212987 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:49:17.213046 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:49:17.258208 1408373 cri.go:89] found id: ""
	I1217 01:49:17.258229 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.258238 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:49:17.258244 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:49:17.258302 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:49:17.290014 1408373 cri.go:89] found id: ""
	I1217 01:49:17.290036 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.290045 1408373 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:49:17.290051 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:49:17.290110 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:49:17.325008 1408373 cri.go:89] found id: ""
	I1217 01:49:17.325032 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.325041 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:49:17.325048 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:49:17.325109 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:49:17.364689 1408373 cri.go:89] found id: ""
	I1217 01:49:17.364711 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.364720 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:49:17.364730 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:49:17.364745 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:49:17.391359 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:49:17.391387 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:49:17.477285 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:49:17.477358 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:49:17.477386 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:49:17.531539 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:49:17.531612 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:49:17.568074 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:49:17.568098 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 01:49:17.639520 1408373 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:49:17.639582 1408373 out.go:285] * 
	W1217 01:49:17.639636 1408373 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:49:17.639654 1408373 out.go:285] * 
	W1217 01:49:17.643440 1408373 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:49:17.650362 1408373 out.go:203] 
	W1217 01:49:17.654350 1408373 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:49:17.654525 1408373 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:49:17.654553 1408373 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:49:17.658249 1408373 out.go:203] 

                                                
                                                
** /stderr **
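The suggestion recorded at the end of the log above maps to a concrete retry. A minimal sketch, reusing the profile, driver, and runtime from this run and adding only the flag the log proposes (the combination is an assumption; it was not verified by this report):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=docker --container-runtime=containerd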
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-916713 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-916713 version --output=json: exit status 1 (168.661806ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
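For the recurring cgroups v1 warning in the kubeadm output, the kubelet option it names ('FailCgroupV1') corresponds to a KubeletConfiguration field. A minimal sketch, assuming the standard lowerCamelCase serialization of that option; this fragment is illustrative only and was not applied in this run:

	# fragment of a kubelet config such as /var/lib/kubelet/config.yaml (seen in the log above)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false    # per the warning text, 'false' explicitly keeps cgroup v1 support enabled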
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-17 01:49:18.802416183 +0000 UTC m=+4911.177824680
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-916713
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-916713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061",
	        "Created": "2025-12-17T01:36:12.701171939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1408505,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:36:45.423138684Z",
	            "FinishedAt": "2025-12-17T01:36:44.332004866Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061/hosts",
	        "LogPath": "/var/lib/docker/containers/9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061/9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061-json.log",
	        "Name": "/kubernetes-upgrade-916713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-916713:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-916713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e5f55b18de459a71e97a1ee04003ccdc74dc15bd9923bc0709cb9bc03547061",
	                "LowerDir": "/var/lib/docker/overlay2/e6c4451dea365aa7dd2073d5baa24988b4d20db68c3f514aa5436e076410b516-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6c4451dea365aa7dd2073d5baa24988b4d20db68c3f514aa5436e076410b516/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6c4451dea365aa7dd2073d5baa24988b4d20db68c3f514aa5436e076410b516/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6c4451dea365aa7dd2073d5baa24988b4d20db68c3f514aa5436e076410b516/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-916713",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-916713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-916713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-916713",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-916713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9b2a182a6ac97091512e816af192d15f8ddabb2464f51c4830986d271df58cf",
	            "SandboxKey": "/var/run/docker/netns/b9b2a182a6ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34171"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-916713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:11:f1:22:32:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a4c8e19576d6241878a8075d14a3ebdaeec86de57d96065df6ca4034b47494b",
	                    "EndpointID": "d9417b3c277c03214ed437b7019aa075df63dd2ede5a2a9f79cf1c97e0d1bddc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-916713",
	                        "9e5f55b18de4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-916713 -n kubernetes-upgrade-916713
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-916713 -n kubernetes-upgrade-916713: exit status 2 (411.861571ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
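For reference, the fields in the inspect dump above can be pulled out individually with docker inspect's Go-template flag, the same mechanism the harness and minikube itself use for their --format checks (a minimal sketch; the container name is the one from this run):

    # Container state and restart count (the "State" and "RestartCount" fields above)
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-916713

    # Host port bound to the apiserver's 8443/tcp (the "NetworkSettings.Ports" map above)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-916713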
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-916713 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-916713 logs -n 25: (1.028756039s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-721629 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl status docker --all --full --no-pager                                            │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl cat docker --no-pager                                                            │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cat /etc/docker/daemon.json                                                                │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo docker system info                                                                         │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cri-dockerd --version                                                                      │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl cat containerd --no-pager                                                        │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo cat /etc/containerd/config.toml                                                            │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo containerd config dump                                                                     │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl status crio --all --full --no-pager                                              │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo systemctl cat crio --no-pager                                                              │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ ssh     │ -p cilium-721629 sudo crio config                                                                                │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │                     │
	│ delete  │ -p cilium-721629                                                                                                 │ cilium-721629            │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │ 17 Dec 25 01:45 UTC │
	│ start   │ -p force-systemd-env-113128 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-113128 │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │ 17 Dec 25 01:45 UTC │
	│ ssh     │ force-systemd-env-113128 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-113128 │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │ 17 Dec 25 01:45 UTC │
	│ delete  │ -p force-systemd-env-113128                                                                                      │ force-systemd-env-113128 │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │ 17 Dec 25 01:45 UTC │
	│ start   │ -p cert-expiration-741064 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-741064   │ jenkins │ v1.37.0 │ 17 Dec 25 01:45 UTC │ 17 Dec 25 01:46 UTC │
	│ start   │ -p cert-expiration-741064 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd  │ cert-expiration-741064   │ jenkins │ v1.37.0 │ 17 Dec 25 01:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:49:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:49:15.091480 1453187 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:49:15.091628 1453187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:49:15.091633 1453187 out.go:374] Setting ErrFile to fd 2...
	I1217 01:49:15.091637 1453187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:49:15.091993 1453187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:49:15.092552 1453187 out.go:368] Setting JSON to false
	I1217 01:49:15.093839 1453187 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27106,"bootTime":1765909050,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:49:15.093908 1453187 start.go:143] virtualization:  
	I1217 01:49:15.097495 1453187 out.go:179] * [cert-expiration-741064] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:49:15.100688 1453187 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:49:15.100749 1453187 notify.go:221] Checking for updates...
	I1217 01:49:15.104680 1453187 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:49:15.107657 1453187 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:49:15.110740 1453187 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:49:15.113702 1453187 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:49:15.116681 1453187 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:49:15.120251 1453187 config.go:182] Loaded profile config "cert-expiration-741064": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:49:15.120921 1453187 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:49:15.161606 1453187 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:49:15.161775 1453187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:49:15.228862 1453187 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 01:49:15.210677706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:49:15.228957 1453187 docker.go:319] overlay module found
	I1217 01:49:15.232066 1453187 out.go:179] * Using the docker driver based on existing profile
	I1217 01:49:15.234995 1453187 start.go:309] selected driver: docker
	I1217 01:49:15.235006 1453187 start.go:927] validating driver "docker" against &{Name:cert-expiration-741064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-741064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:49:15.235107 1453187 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:49:15.235884 1453187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:49:15.292209 1453187 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 01:49:15.281608625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:49:15.292581 1453187 cni.go:84] Creating CNI manager for ""
	I1217 01:49:15.292639 1453187 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:49:15.292684 1453187 start.go:353] cluster config:
	{Name:cert-expiration-741064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-741064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:49:15.297340 1453187 out.go:179] * Starting "cert-expiration-741064" primary control-plane node in "cert-expiration-741064" cluster
	I1217 01:49:15.300205 1453187 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:49:15.303946 1453187 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:49:15.307030 1453187 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 01:49:15.307070 1453187 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1217 01:49:15.307078 1453187 cache.go:65] Caching tarball of preloaded images
	I1217 01:49:15.307095 1453187 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:49:15.307178 1453187 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:49:15.307187 1453187 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1217 01:49:15.307290 1453187 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/cert-expiration-741064/config.json ...
	I1217 01:49:15.329383 1453187 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:49:15.329395 1453187 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:49:15.329408 1453187 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:49:15.329440 1453187 start.go:360] acquireMachinesLock for cert-expiration-741064: {Name:mk1188d042b30df8efbf38240fdacf69bb90594a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:49:15.329493 1453187 start.go:364] duration metric: took 37.982µs to acquireMachinesLock for "cert-expiration-741064"
	I1217 01:49:15.329512 1453187 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:49:15.329516 1453187 fix.go:54] fixHost starting: 
	I1217 01:49:15.329819 1453187 cli_runner.go:164] Run: docker container inspect cert-expiration-741064 --format={{.State.Status}}
	I1217 01:49:15.349072 1453187 fix.go:112] recreateIfNeeded on cert-expiration-741064: state=Running err=<nil>
	W1217 01:49:15.349091 1453187 fix.go:138] unexpected machine state, will restart: <nil>
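The profile config echoed in the lines above is persisted as JSON (see the profile.go save a few lines earlier); when debugging flag drift between starts it can be inspected directly (a sketch using this run's MINIKUBE_HOME; json.tool only pretty-prints the file):

    python3 -m json.tool \
      /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/cert-expiration-741064/config.json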
	I1217 01:49:17.009625 1408373 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000128666s
	I1217 01:49:17.009695 1408373 kubeadm.go:319] 
	I1217 01:49:17.009752 1408373 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:49:17.009787 1408373 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:49:17.009894 1408373 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:49:17.009909 1408373 kubeadm.go:319] 
	I1217 01:49:17.010020 1408373 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:49:17.010088 1408373 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:49:17.010135 1408373 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:49:17.010141 1408373 kubeadm.go:319] 
	I1217 01:49:17.014472 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:49:17.014874 1408373 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:49:17.014978 1408373 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:49:17.015238 1408373 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 01:49:17.015245 1408373 kubeadm.go:319] 
	I1217 01:49:17.015319 1408373 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
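The failed probe logged above is kubeadm's kubelet health check against 127.0.0.1:10248. Both the check and the troubleshooting commands kubeadm suggests can be rerun by hand inside the node container (a sketch, assuming the profile from this run; minikube ssh forwards the trailing command over SSH):

    minikube -p kubernetes-upgrade-916713 ssh -- sudo systemctl status kubelet --no-pager
    minikube -p kubernetes-upgrade-916713 ssh -- sudo journalctl -xeu kubelet --no-pager -n 100
    minikube -p kubernetes-upgrade-916713 ssh -- curl -sS http://127.0.0.1:10248/healthz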
	I1217 01:49:17.015421 1408373 kubeadm.go:403] duration metric: took 12m18.764483931s to StartCluster
	I1217 01:49:17.015466 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:49:17.015526 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:49:17.048222 1408373 cri.go:89] found id: ""
	I1217 01:49:17.048243 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.048252 1408373 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:49:17.048258 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 01:49:17.048331 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:49:17.077952 1408373 cri.go:89] found id: ""
	I1217 01:49:17.077974 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.077987 1408373 logs.go:284] No container was found matching "etcd"
	I1217 01:49:17.077994 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 01:49:17.078057 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:49:17.144981 1408373 cri.go:89] found id: ""
	I1217 01:49:17.145003 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.145012 1408373 logs.go:284] No container was found matching "coredns"
	I1217 01:49:17.145018 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:49:17.145078 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:49:17.212950 1408373 cri.go:89] found id: ""
	I1217 01:49:17.212971 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.212980 1408373 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:49:17.212987 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:49:17.213046 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:49:17.258208 1408373 cri.go:89] found id: ""
	I1217 01:49:17.258229 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.258238 1408373 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:49:17.258244 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:49:17.258302 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:49:17.290014 1408373 cri.go:89] found id: ""
	I1217 01:49:17.290036 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.290045 1408373 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:49:17.290051 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 01:49:17.290110 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:49:17.325008 1408373 cri.go:89] found id: ""
	I1217 01:49:17.325032 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.325041 1408373 logs.go:284] No container was found matching "kindnet"
	I1217 01:49:17.325048 1408373 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:49:17.325109 1408373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:49:17.364689 1408373 cri.go:89] found id: ""
	I1217 01:49:17.364711 1408373 logs.go:282] 0 containers: []
	W1217 01:49:17.364720 1408373 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:49:17.364730 1408373 logs.go:123] Gathering logs for dmesg ...
	I1217 01:49:17.364745 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:49:17.391359 1408373 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:49:17.391387 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:49:17.477285 1408373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:49:17.477358 1408373 logs.go:123] Gathering logs for containerd ...
	I1217 01:49:17.477386 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 01:49:17.531539 1408373 logs.go:123] Gathering logs for container status ...
	I1217 01:49:17.531612 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:49:17.568074 1408373 logs.go:123] Gathering logs for kubelet ...
	I1217 01:49:17.568098 1408373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 01:49:17.639520 1408373 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
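The cgroups-v1 warning in the stderr above is advisory here (the kubelet failed for other reasons), but the opt-in it describes maps to a KubeletConfiguration field. A hypothetical fragment, assuming failCgroupV1 is the kubelet.config.k8s.io/v1beta1 key for the 'FailCgroupV1' option named in the warning, appended as an extra YAML document to a kubeadm config file:

    # Hypothetical sketch: explicitly keep cgroup v1 support for kubelet >= v1.35,
    # per the [WARNING SystemVerification] text above. Not this run's actual config.
    cat >> kubeadm-extra.yaml <<'EOF'
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF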
	W1217 01:49:17.639582 1408373 out.go:285] * 
	W1217 01:49:17.639636 1408373 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:49:17.639654 1408373 out.go:285] * 
	W1217 01:49:17.643440 1408373 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:49:17.650362 1408373 out.go:203] 
	W1217 01:49:17.654350 1408373 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128666s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:49:17.654525 1408373 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:49:17.654553 1408373 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
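Combining that suggestion with the flags visible elsewhere in this run's log, a retry would look roughly like this (a sketch reconstructed from the log, not the test's exact invocation):

    out/minikube-linux-arm64 start -p kubernetes-upgrade-916713 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd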
	I1217 01:49:17.658249 1408373 out.go:203] 
	I1217 01:49:15.352231 1453187 out.go:252] * Updating the running docker "cert-expiration-741064" container ...
	I1217 01:49:15.352259 1453187 machine.go:94] provisionDockerMachine start ...
	I1217 01:49:15.352354 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:15.377531 1453187 main.go:143] libmachine: Using SSH client type: native
	I1217 01:49:15.377896 1453187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34204 <nil> <nil>}
	I1217 01:49:15.377903 1453187 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:49:15.513509 1453187 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-741064
	
	I1217 01:49:15.513523 1453187 ubuntu.go:182] provisioning hostname "cert-expiration-741064"
	I1217 01:49:15.513610 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:15.533452 1453187 main.go:143] libmachine: Using SSH client type: native
	I1217 01:49:15.533857 1453187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34204 <nil> <nil>}
	I1217 01:49:15.533866 1453187 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-741064 && echo "cert-expiration-741064" | sudo tee /etc/hostname
	I1217 01:49:15.676223 1453187 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-741064
	
	I1217 01:49:15.676303 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:15.698832 1453187 main.go:143] libmachine: Using SSH client type: native
	I1217 01:49:15.699126 1453187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34204 <nil> <nil>}
	I1217 01:49:15.699140 1453187 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-741064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-741064/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-741064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:49:15.830026 1453187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:49:15.830042 1453187 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:49:15.830073 1453187 ubuntu.go:190] setting up certificates
	I1217 01:49:15.830090 1453187 provision.go:84] configureAuth start
	I1217 01:49:15.830150 1453187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-741064
	I1217 01:49:15.849101 1453187 provision.go:143] copyHostCerts
	I1217 01:49:15.849176 1453187 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:49:15.849185 1453187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:49:15.849261 1453187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:49:15.849365 1453187 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:49:15.849369 1453187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:49:15.849399 1453187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:49:15.849451 1453187 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:49:15.849454 1453187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:49:15.849477 1453187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:49:15.849522 1453187 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-741064 san=[127.0.0.1 192.168.85.2 cert-expiration-741064 localhost minikube]
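The server-cert generation above can be approximated outside minikube with openssl, reusing the CA material from the same .minikube/certs directory. This is an illustrative sketch of an equivalent signing, not the Go code path provision.go actually runs; the subject org and SANs are copied from the log line, and the validity period here is arbitrary:

    CERTS=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.cert-expiration-741064"
    openssl x509 -req -in server.csr -days 365 \
      -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:cert-expiration-741064,DNS:localhost,DNS:minikube') \
      -out server.pem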
	I1217 01:49:15.957589 1453187 provision.go:177] copyRemoteCerts
	I1217 01:49:15.957672 1453187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:49:15.957713 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:15.975131 1453187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34204 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/cert-expiration-741064/id_rsa Username:docker}
	I1217 01:49:16.075595 1453187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:49:16.095311 1453187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 01:49:16.114656 1453187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:49:16.134021 1453187 provision.go:87] duration metric: took 303.919325ms to configureAuth
	I1217 01:49:16.134040 1453187 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:49:16.134232 1453187 config.go:182] Loaded profile config "cert-expiration-741064": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:49:16.134237 1453187 machine.go:97] duration metric: took 781.97324ms to provisionDockerMachine
	I1217 01:49:16.134244 1453187 start.go:293] postStartSetup for "cert-expiration-741064" (driver="docker")
	I1217 01:49:16.134252 1453187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:49:16.134299 1453187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:49:16.134338 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:16.151974 1453187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34204 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/cert-expiration-741064/id_rsa Username:docker}
	I1217 01:49:16.245976 1453187 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:49:16.249380 1453187 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:49:16.249410 1453187 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:49:16.249421 1453187 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:49:16.249475 1453187 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:49:16.249550 1453187 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:49:16.249684 1453187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:49:16.257701 1453187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:49:16.276283 1453187 start.go:296] duration metric: took 142.019492ms for postStartSetup
	I1217 01:49:16.276401 1453187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:49:16.276457 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:16.293824 1453187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34204 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/cert-expiration-741064/id_rsa Username:docker}
	I1217 01:49:16.391912 1453187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:49:16.398648 1453187 fix.go:56] duration metric: took 1.069125055s for fixHost
	I1217 01:49:16.398673 1453187 start.go:83] releasing machines lock for "cert-expiration-741064", held for 1.069164645s
	I1217 01:49:16.398746 1453187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-741064
	I1217 01:49:16.421833 1453187 ssh_runner.go:195] Run: cat /version.json
	I1217 01:49:16.421875 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:16.421877 1453187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:49:16.421933 1453187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-741064
	I1217 01:49:16.445275 1453187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34204 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/cert-expiration-741064/id_rsa Username:docker}
	I1217 01:49:16.454509 1453187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34204 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/cert-expiration-741064/id_rsa Username:docker}
	I1217 01:49:16.545925 1453187 ssh_runner.go:195] Run: systemctl --version
	I1217 01:49:16.639651 1453187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:49:16.644038 1453187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:49:16.644099 1453187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:49:16.652976 1453187 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:49:16.652991 1453187 start.go:496] detecting cgroup driver to use...
	I1217 01:49:16.653020 1453187 detect.go:187] detected "cgroupfs" cgroup driver on host os
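detect.go reports "cgroupfs" here because the host is still on cgroup v1. A quick host-side spot check, not minikube's exact detection logic:

    stat -fc %T /sys/fs/cgroup   # tmpfs -> cgroup v1, cgroup2fs -> cgroup v2
    mount | grep ^cgroup         # shows which hierarchy is mounted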
	I1217 01:49:16.653078 1453187 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:49:16.669701 1453187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:49:16.683922 1453187 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:49:16.683986 1453187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:49:16.700462 1453187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:49:16.713935 1453187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:49:16.875828 1453187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:49:17.029389 1453187 docker.go:234] disabling docker service ...
	I1217 01:49:17.029458 1453187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:49:17.045799 1453187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:49:17.059283 1453187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:49:17.263552 1453187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:49:17.486758 1453187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:49:17.505300 1453187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:49:17.526449 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:49:17.538558 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:49:17.549014 1453187 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:49:17.549076 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:49:17.559768 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:49:17.576397 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:49:17.595216 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:49:17.606797 1453187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:49:17.615922 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:49:17.624496 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:49:17.633129 1453187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
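Taken together, the sed edits above converge on a CRI section of /etc/containerd/config.toml roughly like the fragment below; the daemon-reload and containerd restart that follow in the log then apply it. This is a reconstruction from the sed patterns, assuming the containerd 1.x-style plugin naming they target (containerd 2.x ships a different default schema, so check section names before editing by hand):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false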
	I1217 01:49:17.642460 1453187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:49:17.651617 1453187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:49:17.660229 1453187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:49:17.886103 1453187 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:49:18.370064 1453187 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:49:18.370125 1453187 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:49:18.375195 1453187 start.go:564] Will wait 60s for crictl version
	I1217 01:49:18.375249 1453187 ssh_runner.go:195] Run: which crictl
	I1217 01:49:18.380806 1453187 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:49:18.428477 1453187 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:49:18.428537 1453187 ssh_runner.go:195] Run: containerd --version
	I1217 01:49:18.458849 1453187 ssh_runner.go:195] Run: containerd --version
	I1217 01:49:18.486285 1453187 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:41:09 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:09.821328680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:41:09 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:09.822805029Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.380341663s"
	Dec 17 01:41:09 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:09.822850806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 17 01:41:09 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:09.824315831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.526705037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.530909269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.533475616Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.538485051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.540170559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 715.711465ms"
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.540229465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 17 01:41:10 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:10.541112724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.225515472Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.225551001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.227505690Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.232584188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.233889360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.69273405s"
	Dec 17 01:41:12 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:41:12.234042042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.706614544Z" level=info msg="container event discarded" container=f98587294970577e2335f035851e16e0c4d24debe476079345b62c3077a6e068 type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.720871782Z" level=info msg="container event discarded" container=2193ce742677002144dca7ca311dfb35605d19dac7816b15cbd1f97a3495b799 type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.733153308Z" level=info msg="container event discarded" container=aedf4e405c5a2f95bff60f27552c6d94c005368e24b6510f049c2e46a54124b6 type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.733213830Z" level=info msg="container event discarded" container=15697f0708254262b956bb3c77a18c831808ea02b60d2e5f687f9ac506b38e09 type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.749458940Z" level=info msg="container event discarded" container=f6703bb23f3c15b48bb7eda59598daee9f26e93963117fb2adc6e7e8576042dd type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.749519741Z" level=info msg="container event discarded" container=1648ed5423d5098e14c1fe3007020545b7a1212762f8ac1ba48603fc5f6d841c type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.765704158Z" level=info msg="container event discarded" container=5b90a5ba4927d729d65dca32715d80cd416a1d9278cd892c678c5fdc5648d2a5 type=CONTAINER_DELETED_EVENT
	Dec 17 01:46:01 kubernetes-upgrade-916713 containerd[557]: time="2025-12-17T01:46:01.765765081Z" level=info msg="container event discarded" container=81f1e1fe36420168382e4bfc4473fb9bad0a7b1c116a4180bf7944c342dfd0f3 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:49:20 up  7:31,  0 user,  load average: 0.58, 1.32, 1.80
	Linux kubernetes-upgrade-916713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:49:16 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:17 kubernetes-upgrade-916713 kubelet[14347]: E1217 01:49:17.221239   14347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:17 kubernetes-upgrade-916713 kubelet[14416]: E1217 01:49:17.971715   14416 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:49:17 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:49:18 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 01:49:18 kubernetes-upgrade-916713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:18 kubernetes-upgrade-916713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:18 kubernetes-upgrade-916713 kubelet[14421]: E1217 01:49:18.758042   14421 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:49:18 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:49:18 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:49:19 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 01:49:19 kubernetes-upgrade-916713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:19 kubernetes-upgrade-916713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:49:19 kubernetes-upgrade-916713 kubelet[14455]: E1217 01:49:19.741374   14455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:49:19 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:49:19 kubernetes-upgrade-916713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
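The journal excerpt shows kubelet cycling through its 320th-plus restart with the same cgroup-v1 validation error each time. Two ways to read the loop state directly on such a node, using standard systemd tooling (nothing minikube-specific):

    systemctl show kubelet -p NRestarts -p Result -p ExecMainStatus
    journalctl -u kubelet -n 20 --no-pager | grep -F 'command failed'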
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-916713 -n kubernetes-upgrade-916713
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-916713 -n kubernetes-upgrade-916713: exit status 2 (461.6006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-916713" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-916713" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-916713
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-916713: (2.527851794s)
--- FAIL: TestKubernetesUpgrade (796.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (509.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m27.434565829s)

                                                
                                                
-- stdout --
	* [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:53:09.671439 1475658 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:53:09.671611 1475658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:09.671642 1475658 out.go:374] Setting ErrFile to fd 2...
	I1217 01:53:09.671662 1475658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:09.671964 1475658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:53:09.672433 1475658 out.go:368] Setting JSON to false
	I1217 01:53:09.673442 1475658 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27340,"bootTime":1765909050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:53:09.673543 1475658 start.go:143] virtualization:  
	I1217 01:53:09.677520 1475658 out.go:179] * [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:53:09.681609 1475658 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:53:09.681757 1475658 notify.go:221] Checking for updates...
	I1217 01:53:09.687838 1475658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:53:09.690941 1475658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:53:09.694127 1475658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:53:09.697196 1475658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:53:09.700235 1475658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:53:09.703845 1475658 config.go:182] Loaded profile config "embed-certs-608379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:53:09.704003 1475658 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:53:09.729754 1475658 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:53:09.729923 1475658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:09.791131 1475658 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:53:09.782001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:53:09.791258 1475658 docker.go:319] overlay module found
	I1217 01:53:09.794553 1475658 out.go:179] * Using the docker driver based on user configuration
	I1217 01:53:09.797416 1475658 start.go:309] selected driver: docker
	I1217 01:53:09.797443 1475658 start.go:927] validating driver "docker" against <nil>
	I1217 01:53:09.797457 1475658 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:53:09.798243 1475658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:09.853928 1475658 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:53:09.844237041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:53:09.854083 1475658 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 01:53:09.854314 1475658 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:53:09.857275 1475658 out.go:179] * Using Docker driver with root privileges
	I1217 01:53:09.860109 1475658 cni.go:84] Creating CNI manager for ""
	I1217 01:53:09.860180 1475658 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:53:09.860193 1475658 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:53:09.860284 1475658 start.go:353] cluster config:
	{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:53:09.865553 1475658 out.go:179] * Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	I1217 01:53:09.868403 1475658 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:53:09.871285 1475658 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:53:09.874178 1475658 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:53:09.874260 1475658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:53:09.874333 1475658 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 01:53:09.874363 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json: {Name:mk95cb625ddde68fa4a48c7247a1995ba638c8c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:09.874612 1475658 cache.go:107] acquiring lock: {Name:mk4890d4b47ae1973de2f5e1f0682feb41ee40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.874670 1475658 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 01:53:09.874689 1475658 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.72µs
	I1217 01:53:09.874705 1475658 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 01:53:09.874715 1475658 cache.go:107] acquiring lock: {Name:mk966096fd85af29d80d70ba567f975fd1c8ab20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.874868 1475658 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:09.875268 1475658 cache.go:107] acquiring lock: {Name:mkf4d095c495df29849f640a0755588b041f7643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.875385 1475658 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:09.875607 1475658 cache.go:107] acquiring lock: {Name:mk1c22383e6094d20d836c3a904bbbe609668a02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.875772 1475658 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:09.876062 1475658 cache.go:107] acquiring lock: {Name:mkc3683c3186a723f5651545e5f013a6bc8b78e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.876164 1475658 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:09.876837 1475658 cache.go:107] acquiring lock: {Name:mk3a7027108fb6cda418f0aea932fdb404491198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.876921 1475658 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 01:53:09.876937 1475658 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 107.8µs
	I1217 01:53:09.876945 1475658 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 01:53:09.876962 1475658 cache.go:107] acquiring lock: {Name:mkbcf0cf66af7f52acaeaf88186edd5961eb7fb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.876995 1475658 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1217 01:53:09.877003 1475658 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.832µs
	I1217 01:53:09.877010 1475658 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1217 01:53:09.877086 1475658 cache.go:107] acquiring lock: {Name:mk85e5e85708e9527e64bdd95012aff390add343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.877185 1475658 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:09.878195 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:09.878612 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:09.878751 1475658 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:09.879199 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:09.880724 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:09.900352 1475658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:53:09.900378 1475658 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:53:09.900394 1475658 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:53:09.900425 1475658 start.go:360] acquireMachinesLock for no-preload-178365: {Name:mkd4a1763d090ac24f95097d34ac035f597ec2f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:09.900545 1475658 start.go:364] duration metric: took 99.226µs to acquireMachinesLock for "no-preload-178365"
	I1217 01:53:09.900576 1475658 start.go:93] Provisioning new machine with config: &{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:53:09.900646 1475658 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:53:09.904248 1475658 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:53:09.904503 1475658 start.go:159] libmachine.API.Create for "no-preload-178365" (driver="docker")
	I1217 01:53:09.904535 1475658 client.go:173] LocalClient.Create starting
	I1217 01:53:09.904601 1475658 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:53:09.904642 1475658 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:09.904662 1475658 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:09.904718 1475658 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:53:09.904739 1475658 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:09.904759 1475658 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:09.905125 1475658 cli_runner.go:164] Run: docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:53:09.928056 1475658 cli_runner.go:211] docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:53:09.928138 1475658 network_create.go:284] running [docker network inspect no-preload-178365] to gather additional debugging logs...
	I1217 01:53:09.928161 1475658 cli_runner.go:164] Run: docker network inspect no-preload-178365
	W1217 01:53:09.947124 1475658 cli_runner.go:211] docker network inspect no-preload-178365 returned with exit code 1
	I1217 01:53:09.947158 1475658 network_create.go:287] error running [docker network inspect no-preload-178365]: docker network inspect no-preload-178365: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-178365 not found
	I1217 01:53:09.947173 1475658 network_create.go:289] output of [docker network inspect no-preload-178365]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-178365 not found
	
	** /stderr **
	I1217 01:53:09.947272 1475658 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:53:09.963729 1475658 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:53:09.965469 1475658 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:53:09.965906 1475658 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:53:09.966353 1475658 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b8e7f0}
	I1217 01:53:09.966375 1475658 network_create.go:124] attempt to create docker network no-preload-178365 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:53:09.966430 1475658 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-178365 no-preload-178365
	I1217 01:53:10.045283 1475658 network_create.go:108] docker network no-preload-178365 192.168.76.0/24 created
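Once network_create reports success, the chosen subnet and gateway can be confirmed with a format query; the template below is a sketch using docker's standard Go-template inspection:

    docker network inspect no-preload-178365 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected output for this run: 192.168.76.0/24 192.168.76.1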
	I1217 01:53:10.045320 1475658 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-178365" container
	I1217 01:53:10.045424 1475658 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:53:10.064154 1475658 cli_runner.go:164] Run: docker volume create no-preload-178365 --label name.minikube.sigs.k8s.io=no-preload-178365 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:53:10.085773 1475658 oci.go:103] Successfully created a docker volume no-preload-178365
	I1217 01:53:10.085877 1475658 cli_runner.go:164] Run: docker run --rm --name no-preload-178365-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-178365 --entrypoint /usr/bin/test -v no-preload-178365:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
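The sidecar run above is a volume pre-population probe: mounting the empty named volume at /var makes docker copy the image's /var contents into it on first mount, and the /usr/bin/test entrypoint exits 0 only if /var/lib made it across. A stripped-down sketch of the same pattern, with minikube's labels omitted and the image pinned as in the log:

    docker run --rm --entrypoint /usr/bin/test \
      -v no-preload-178365:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
      -d /var/lib && echo "volume prepared"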
	I1217 01:53:10.228443 1475658 cache.go:162] opening:  /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1217 01:53:10.244869 1475658 cache.go:162] opening:  /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1217 01:53:10.253069 1475658 cache.go:162] opening:  /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 01:53:10.264218 1475658 cache.go:162] opening:  /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:10.311747 1475658 cache.go:162] opening:  /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1217 01:53:10.734475 1475658 oci.go:107] Successfully prepared a docker volume no-preload-178365
	I1217 01:53:10.734535 1475658 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1217 01:53:10.734659 1475658 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:53:10.734766 1475658 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:53:10.832836 1475658 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-178365 --name no-preload-178365 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-178365 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-178365 --network no-preload-178365 --ip 192.168.76.2 --volume no-preload-178365:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:53:10.866261 1475658 cache.go:157] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1217 01:53:10.866338 1475658 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 990.279994ms
	I1217 01:53:10.866367 1475658 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1217 01:53:11.160969 1475658 cache.go:157] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1217 01:53:11.161062 1475658 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.285457962s
	I1217 01:53:11.161093 1475658 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 01:53:11.278806 1475658 cache.go:157] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 01:53:11.278838 1475658 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.401752939s
	I1217 01:53:11.278850 1475658 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 01:53:11.303268 1475658 cache.go:157] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1217 01:53:11.303389 1475658 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.428584179s
	I1217 01:53:11.303411 1475658 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 01:53:11.317077 1475658 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Running}}
	I1217 01:53:11.336889 1475658 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 01:53:11.362663 1475658 cli_runner.go:164] Run: docker exec no-preload-178365 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:53:11.396514 1475658 cache.go:157] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1217 01:53:11.396542 1475658 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.521280409s
	I1217 01:53:11.396555 1475658 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 01:53:11.396580 1475658 cache.go:87] Successfully saved all images to host disk.
	I1217 01:53:11.434605 1475658 oci.go:144] the created container "no-preload-178365" has a running status.
	I1217 01:53:11.434634 1475658 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa...
	I1217 01:53:11.703739 1475658 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:53:11.724116 1475658 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 01:53:11.747711 1475658 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:53:11.747737 1475658 kic_runner.go:114] Args: [docker exec --privileged no-preload-178365 chown docker:docker /home/docker/.ssh/authorized_keys]
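With the public key installed and chowned, the node accepts key-based SSH as the unprivileged "docker" user on the forwarded port; that is all the libmachine SSH client below relies on. A manual equivalent, using this run's key path and mapped port:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa \
        -p 34239 docker@127.0.0.1 hostname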
	I1217 01:53:11.806851 1475658 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 01:53:11.834572 1475658 machine.go:94] provisionDockerMachine start ...
	I1217 01:53:11.834662 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:11.860179 1475658 main.go:143] libmachine: Using SSH client type: native
	I1217 01:53:11.860547 1475658 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34239 <nil> <nil>}
	I1217 01:53:11.860557 1475658 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:53:11.861247 1475658 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:53:14.993377 1475658 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 01:53:14.993402 1475658 ubuntu.go:182] provisioning hostname "no-preload-178365"
	I1217 01:53:14.993495 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.039600 1475658 main.go:143] libmachine: Using SSH client type: native
	I1217 01:53:15.039919 1475658 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34239 <nil> <nil>}
	I1217 01:53:15.039935 1475658 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-178365 && echo "no-preload-178365" | sudo tee /etc/hostname
	I1217 01:53:15.191762 1475658 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 01:53:15.191849 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.210451 1475658 main.go:143] libmachine: Using SSH client type: native
	I1217 01:53:15.210784 1475658 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34239 <nil> <nil>}
	I1217 01:53:15.210805 1475658 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178365/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:53:15.345931 1475658 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:53:15.346002 1475658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:53:15.346047 1475658 ubuntu.go:190] setting up certificates
	I1217 01:53:15.346097 1475658 provision.go:84] configureAuth start
	I1217 01:53:15.346185 1475658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 01:53:15.364655 1475658 provision.go:143] copyHostCerts
	I1217 01:53:15.364718 1475658 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:53:15.364727 1475658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:53:15.364807 1475658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:53:15.364905 1475658 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:53:15.364910 1475658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:53:15.364937 1475658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:53:15.364985 1475658 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:53:15.364990 1475658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:53:15.365012 1475658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:53:15.365056 1475658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.no-preload-178365 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-178365]
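The server cert is minted with a SAN for every name or address the machine may be dialed by: loopback, the static container IP, and the host and cluster names. To confirm which SANs actually landed in the issued cert, openssl can dump the extension (a sketch, using the server.pem path above):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'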
	I1217 01:53:15.453396 1475658 provision.go:177] copyRemoteCerts
	I1217 01:53:15.453501 1475658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:53:15.453562 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.472633 1475658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34239 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 01:53:15.569493 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:53:15.587010 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:53:15.604531 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:53:15.621830 1475658 provision.go:87] duration metric: took 275.690424ms to configureAuth
	I1217 01:53:15.621859 1475658 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:53:15.622043 1475658 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:53:15.622055 1475658 machine.go:97] duration metric: took 3.787465996s to provisionDockerMachine
	I1217 01:53:15.622062 1475658 client.go:176] duration metric: took 5.717518624s to LocalClient.Create
	I1217 01:53:15.622081 1475658 start.go:167] duration metric: took 5.717579728s to libmachine.API.Create "no-preload-178365"
	I1217 01:53:15.622091 1475658 start.go:293] postStartSetup for "no-preload-178365" (driver="docker")
	I1217 01:53:15.622101 1475658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:53:15.622152 1475658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:53:15.622194 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.640347 1475658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34239 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 01:53:15.739018 1475658 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:53:15.742918 1475658 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:53:15.742945 1475658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:53:15.742956 1475658 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:53:15.743019 1475658 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:53:15.743107 1475658 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:53:15.743211 1475658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:53:15.751166 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:53:15.772669 1475658 start.go:296] duration metric: took 150.563623ms for postStartSetup
	I1217 01:53:15.773096 1475658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 01:53:15.790774 1475658 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 01:53:15.791051 1475658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:53:15.791101 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.808284 1475658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34239 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 01:53:15.902697 1475658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:53:15.907257 1475658 start.go:128] duration metric: took 6.006596696s to createHost
	I1217 01:53:15.907283 1475658 start.go:83] releasing machines lock for "no-preload-178365", held for 6.00672345s
	I1217 01:53:15.907370 1475658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 01:53:15.925231 1475658 ssh_runner.go:195] Run: cat /version.json
	I1217 01:53:15.925286 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.925548 1475658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:53:15.925632 1475658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 01:53:15.945596 1475658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34239 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 01:53:15.946393 1475658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34239 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 01:53:16.046139 1475658 ssh_runner.go:195] Run: systemctl --version
	I1217 01:53:16.144062 1475658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:53:16.148620 1475658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:53:16.148716 1475658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:53:16.180251 1475658 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:53:16.180326 1475658 start.go:496] detecting cgroup driver to use...
	I1217 01:53:16.180375 1475658 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:53:16.180483 1475658 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:53:16.196083 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:53:16.209497 1475658 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:53:16.209574 1475658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:53:16.230882 1475658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:53:16.251820 1475658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:53:16.379539 1475658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:53:16.506839 1475658 docker.go:234] disabling docker service ...
	I1217 01:53:16.506947 1475658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:53:16.529548 1475658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:53:16.544975 1475658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:53:16.658005 1475658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:53:16.790919 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:53:16.807464 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:53:16.823380 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:53:16.833184 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:53:16.842844 1475658 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:53:16.842964 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:53:16.851906 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:53:16.861383 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:53:16.871801 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:53:16.881436 1475658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:53:16.890771 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:53:16.901725 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:53:16.910507 1475658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:53:16.919819 1475658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:53:16.928618 1475658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:53:16.936114 1475658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:17.050893 1475658 ssh_runner.go:195] Run: sudo systemctl restart containerd
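The sed edits above pin the sandbox image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalize every runtime entry to io.containerd.runc.v2 before containerd is restarted. A post-restart sanity check (sketch):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    systemctl is-active containerd                        # expect: active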
	I1217 01:53:17.158539 1475658 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:53:17.158610 1475658 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:53:17.164149 1475658 start.go:564] Will wait 60s for crictl version
	I1217 01:53:17.164219 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.169003 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:53:17.195640 1475658 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:53:17.195710 1475658 ssh_runner.go:195] Run: containerd --version
	I1217 01:53:17.218419 1475658 ssh_runner.go:195] Run: containerd --version
	I1217 01:53:17.244036 1475658 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:53:17.247016 1475658 cli_runner.go:164] Run: docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:53:17.272463 1475658 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 01:53:17.276554 1475658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
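The one-liner above is an idempotent replace-or-append pattern for /etc/hosts: drop any line already ending in the marker name, append the fresh mapping, and copy the temp file back via sudo so the shell redirection itself never needs root. The same pattern with hypothetical values:

    NAME=host.example.internal    # hypothetical marker name
    ADDR=192.0.2.10               # hypothetical address (TEST-NET-1)
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts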
	I1217 01:53:17.286663 1475658 kubeadm.go:884] updating cluster {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:53:17.286778 1475658 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:53:17.286834 1475658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:53:17.313538 1475658 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:53:17.313565 1475658 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 01:53:17.313620 1475658 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:17.313735 1475658 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.313886 1475658 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.313902 1475658 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.313962 1475658 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.314010 1475658 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.314052 1475658 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.313888 1475658 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 01:53:17.316117 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.316382 1475658 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.316558 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.316740 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.316909 1475658 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.317058 1475658 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 01:53:17.317225 1475658 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:17.317604 1475658 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.567857 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1217 01:53:17.567932 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1217 01:53:17.569160 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1217 01:53:17.569224 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.574216 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1217 01:53:17.574327 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.595312 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1217 01:53:17.595383 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.604390 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1217 01:53:17.604459 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.609150 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1217 01:53:17.609269 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.631625 1475658 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1217 01:53:17.631691 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
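Each existence probe above pairs an image name with the sha recorded for its cached tarball, then asks ctr whether that exact name is present in containerd's k8s.io namespace; an empty result or digest mismatch marks the image for transfer. The filter syntax is usable directly on the node (sketch):

    sudo ctr -n k8s.io images ls 'name==registry.k8s.io/pause:3.10.1'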
	I1217 01:53:17.688616 1475658 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1217 01:53:17.688795 1475658 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.688871 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.688695 1475658 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1217 01:53:17.688962 1475658 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.689001 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.688708 1475658 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1217 01:53:17.689073 1475658 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 01:53:17.689109 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.688753 1475658 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1217 01:53:17.689172 1475658 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.689216 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.694724 1475658 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1217 01:53:17.694924 1475658 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.694865 1475658 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1217 01:53:17.694988 1475658 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.695041 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.695133 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.700200 1475658 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1217 01:53:17.700282 1475658 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.700352 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:17.705271 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.705397 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.705494 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:53:17.705619 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.706232 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.706494 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.710281 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.826336 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.826411 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.826451 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:53:17.826519 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.826586 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.826646 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:17.826725 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.935391 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:53:17.935464 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:53:17.935506 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:53:17.935581 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:53:17.935633 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:53:17.935689 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:53:17.935789 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:53:18.039282 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:18.039375 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1217 01:53:18.039475 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:53:18.039535 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1217 01:53:18.039592 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 01:53:18.039659 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:53:18.039676 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:53:18.039746 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:18.059010 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1217 01:53:18.059247 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1217 01:53:18.059280 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1217 01:53:18.059368 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1217 01:53:18.059388 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1217 01:53:18.059444 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:53:18.059449 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1217 01:53:18.059535 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1217 01:53:18.059132 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1217 01:53:18.059665 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:53:18.059202 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 01:53:18.059726 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1217 01:53:18.059106 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1217 01:53:18.059809 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 01:53:18.080519 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 01:53:18.080619 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1217 01:53:18.123401 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1217 01:53:18.123709 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1217 01:53:18.123485 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1217 01:53:18.123830 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1217 01:53:18.263195 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 01:53:18.263466 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1217 01:53:18.423134 1475658 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1217 01:53:18.424181 1475658 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1217 01:53:18.424286 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:18.621396 1475658 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1217 01:53:18.621495 1475658 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:18.621581 1475658 ssh_runner.go:195] Run: which crictl
	I1217 01:53:18.621678 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1217 01:53:18.660206 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:53:18.660359 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:53:18.696121 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:20.013419 1475658 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.353015631s)
	I1217 01:53:20.013458 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1217 01:53:20.013480 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:53:20.013535 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:53:20.013604 1475658 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.317257151s)
	I1217 01:53:20.013659 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:21.095760 1475658 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.082059111s)
	I1217 01:53:21.095863 1475658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:53:21.101374 1475658 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.087811774s)
	I1217 01:53:21.101421 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 01:53:21.101442 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:21.101496 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:21.138605 1475658 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 01:53:21.138777 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:53:22.051306 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 01:53:22.051342 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1217 01:53:22.051471 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1217 01:53:22.051498 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:53:22.051547 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:53:23.142289 1475658 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.090718186s)
	I1217 01:53:23.142370 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1217 01:53:23.142426 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:53:23.142511 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:53:24.277326 1475658 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.134785458s)
	I1217 01:53:24.277355 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1217 01:53:24.277373 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:53:24.277421 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:53:25.724326 1475658 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.446875945s)
	I1217 01:53:25.724362 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1217 01:53:25.724381 1475658 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:53:25.724431 1475658 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:53:26.119254 1475658 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 01:53:26.119286 1475658 cache_images.go:125] Successfully loaded all cached images
	I1217 01:53:26.119292 1475658 cache_images.go:94] duration metric: took 8.805702576s to LoadCachedImages
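After the final import, every image kubeadm will reference should be visible through the CRI. A quick check (sketch):

    sudo crictl images | grep -E 'kube-|coredns|etcd|pause|storage-provisioner'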
	I1217 01:53:26.119304 1475658 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:53:26.119400 1475658 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-178365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
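The unit drop-in above blanks ExecStart and re-points it at the version-pinned kubelet with the node identity flags; --hostname-override and --node-ip must agree with the kubeadm config rendered next. Once the files land (a few lines below), systemd's merged view can be inspected (sketch):

    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf override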
	I1217 01:53:26.119462 1475658 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:53:26.145716 1475658 cni.go:84] Creating CNI manager for ""
	I1217 01:53:26.145739 1475658 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:53:26.145759 1475658 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:53:26.145782 1475658 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178365 NodeName:no-preload-178365 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:53:26.145898 1475658 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-178365"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
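
This three-document config (InitConfiguration, ClusterConfiguration, plus the KubeletConfiguration and KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below. It can be linted offline before any init attempt; a sketch, assuming a kubeadm recent enough to ship the "config validate" subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new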
	
	I1217 01:53:26.145967 1475658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:53:26.155343 1475658 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1217 01:53:26.155407 1475658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:53:26.163398 1475658 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1217 01:53:26.163694 1475658 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1217 01:53:26.163767 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1217 01:53:26.163575 1475658 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1217 01:53:26.168402 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1217 01:53:26.168437 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
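Because no preload tarball exists for this beta version, each binary is fetched from dl.k8s.io and verified against its detached .sha256 file (the checksum=file: suffix on the URLs above). The manual equivalent (sketch):

    cd "$(mktemp -d)"
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expect: kubectl: OK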
	I1217 01:53:27.165090 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:53:27.186285 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1217 01:53:27.191155 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1217 01:53:27.191264 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1217 01:53:27.240188 1475658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1217 01:53:27.249985 1475658 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1217 01:53:27.250077 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
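The three binaries above are each fetched with a sha256 sidecar (`?checksum=file:<url>.sha256`), verified, then scp'd into /var/lib/minikube/binaries on the node. A minimal sketch of a checksum-verified download, assuming the sidecar starts with the hex digest; this illustrates the pattern, it is not minikube's download.go:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchChecked downloads url to dest while hashing the stream, then
    // compares the digest against the .sha256 sidecar. Error handling is
    // abbreviated; a sketch only.
    func fetchChecked(url, dest string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	f, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}

    	sum, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sum.Body.Close()
    	want, _ := io.ReadAll(sum.Body)

    	got := hex.EncodeToString(h.Sum(nil))
    	if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
    		return fmt.Errorf("checksum mismatch for %s", url)
    	}
    	return nil
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
    	if err := fetchChecked(url, "/tmp/kubectl"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }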
	I1217 01:53:27.881263 1475658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:53:27.891889 1475658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:53:27.907851 1475658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:53:27.923539 1475658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 01:53:27.938588 1475658 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:53:27.942341 1475658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
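The /etc/hosts pipeline above is idempotent: strip any existing `control-plane.minikube.internal` line, then append the current mapping. The same logic as a local Go sketch (minikube runs the bash version over ssh with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell pipeline in the log: drop any
    // line already ending in "\t<name>", append the desired mapping, and
    // replace the file via a temp copy. A sketch only.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }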
	I1217 01:53:27.957370 1475658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:28.083739 1475658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:53:28.116713 1475658 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365 for IP: 192.168.76.2
	I1217 01:53:28.116784 1475658 certs.go:195] generating shared ca certs ...
	I1217 01:53:28.116841 1475658 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.117052 1475658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:53:28.117137 1475658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:53:28.117167 1475658 certs.go:257] generating profile certs ...
	I1217 01:53:28.117253 1475658 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key
	I1217 01:53:28.117298 1475658 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.crt with IP's: []
	I1217 01:53:28.421851 1475658 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.crt ...
	I1217 01:53:28.421884 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.crt: {Name:mk5050329787d3c8c01a2be4eb778d959a2fad72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.422081 1475658 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key ...
	I1217 01:53:28.422094 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key: {Name:mk6c95ea27fb802f55bd123a52ba9fb08a779993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.422184 1475658 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2
	I1217 01:53:28.422201 1475658 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt.2535d4d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:53:28.632042 1475658 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt.2535d4d2 ...
	I1217 01:53:28.632075 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt.2535d4d2: {Name:mkd1ed4cdad4077928e46c49323c8c747b840715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.632251 1475658 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2 ...
	I1217 01:53:28.632266 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2: {Name:mk4190b2c06c9d9b7843a24c0cfd585446477048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.632351 1475658 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt.2535d4d2 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt
	I1217 01:53:28.632436 1475658 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key
	I1217 01:53:28.632499 1475658 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key
	I1217 01:53:28.632518 1475658 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt with IP's: []
	I1217 01:53:28.784008 1475658 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt ...
	I1217 01:53:28.784035 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt: {Name:mk760cc8012f37ca1b2cdd7078a633a248e7821f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:28.784221 1475658 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key ...
	I1217 01:53:28.784238 1475658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key: {Name:mk7830f68723d6ed500411d5444b8c8ab9f0cb5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
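The apiserver profile cert above is signed for four IP SANs: 10.96.0.1 (the in-cluster kubernetes service VIP), 127.0.0.1, 10.0.0.1, and 192.168.76.2 (the node IP). A self-signed sketch of issuing a cert with those SANs via crypto/x509; minikube signs with its minikubeCA instead, so treat this as an illustration of the SAN list only:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		// The same IP SANs the log reports for apiserver.crt.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for brevity: template doubles as parent.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }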
	I1217 01:53:28.784429 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:53:28.784477 1475658 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:53:28.784490 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:53:28.784517 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:53:28.784550 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:53:28.784575 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:53:28.784624 1475658 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:53:28.785194 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:53:28.806371 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:53:28.824519 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:53:28.844099 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:53:28.862804 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:53:28.888158 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 01:53:28.906454 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:53:28.924772 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:53:28.946647 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:53:28.964778 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:53:28.983589 1475658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:53:29.005226 1475658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:53:29.018942 1475658 ssh_runner.go:195] Run: openssl version
	I1217 01:53:29.025427 1475658 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:53:29.033165 1475658 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:53:29.040967 1475658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:53:29.044752 1475658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:53:29.044814 1475658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:53:29.087917 1475658 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:53:29.096270 1475658 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:53:29.106202 1475658 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:29.116386 1475658 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:53:29.125991 1475658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:29.129980 1475658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:29.130045 1475658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:29.172281 1475658 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:53:29.181090 1475658 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:53:29.188677 1475658 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:53:29.196949 1475658 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:53:29.204513 1475658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:53:29.208338 1475658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:53:29.208405 1475658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:53:29.249551 1475658 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:53:29.257092 1475658 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
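The `<8-hex-digits>.0` names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash symlinks: TLS stacks look up a CA in /etc/ssl/certs by the hash of its subject name. A sketch of the hash-then-symlink step, shelling out to openssl just as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash reproduces the openssl-then-ln dance in the log:
    // compute the subject hash of the PEM and create <certsDir>/<hash>.0
    // pointing at it. Requires the openssl binary; sketch only.
    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }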
	I1217 01:53:29.264455 1475658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:53:29.268116 1475658 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:53:29.268172 1475658 kubeadm.go:401] StartCluster: {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:53:29.268248 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:53:29.268308 1475658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:53:29.295161 1475658 cri.go:89] found id: ""
	I1217 01:53:29.295234 1475658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:53:29.303195 1475658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:53:29.311061 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:53:29.311188 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:53:29.319562 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:53:29.319622 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:53:29.319699 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:53:29.327670 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:53:29.327763 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:53:29.335270 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:53:29.343308 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:53:29.343384 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:53:29.351295 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:53:29.359401 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:53:29.359520 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:53:29.366885 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:53:29.375387 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:53:29.375455 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:53:29.383148 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:53:29.505402 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:53:29.505994 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:53:29.579297 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:57:34.124748 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:57:34.124781 1475658 kubeadm.go:319] 
	I1217 01:57:34.124851 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:57:34.130032 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.130094 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.130184 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.130239 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.130274 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.130319 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.130369 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.130417 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.130466 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.130513 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.130562 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.130607 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.130655 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.130701 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.130774 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.130869 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.130959 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.131021 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.134054 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.134142 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.134206 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.134273 1475658 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:57:34.134329 1475658 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:57:34.134389 1475658 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:57:34.134439 1475658 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:57:34.134492 1475658 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:57:34.134614 1475658 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134712 1475658 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:57:34.134885 1475658 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134988 1475658 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:57:34.135097 1475658 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:57:34.135183 1475658 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:57:34.135283 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:34.135344 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:34.135402 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:34.135459 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:34.135521 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:34.135575 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:34.135655 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:34.135721 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:34.138598 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:34.138713 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:34.138799 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:34.138871 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:34.138982 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:34.139083 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:34.139203 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:34.139301 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:34.139344 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:34.139483 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:34.139594 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.139663 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005993508s
	I1217 01:57:34.139667 1475658 kubeadm.go:319] 
	I1217 01:57:34.139728 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:57:34.139770 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:57:34.139882 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:57:34.139887 1475658 kubeadm.go:319] 
	I1217 01:57:34.139998 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:57:34.140032 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:57:34.140065 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
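The kubelet-check phase described above is simply an HTTP poll of the kubelet's local healthz endpoint with a 4-minute ceiling; in this run it never got a single 200 ("context deadline exceeded"), so the failure is in kubelet startup rather than in the probe itself. A sketch of the equivalent poll loop:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // Poll http://127.0.0.1:10248/healthz until it answers 200 OK or a
    // 4m deadline passes, mirroring kubeadm's kubelet-check. Sketch only.
    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	client := &http.Client{Timeout: 5 * time.Second}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("kubelet healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("kubelet not healthy after 4m")
    }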
	W1217 01:57:34.140174 1475658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005993508s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:57:34.140253 1475658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:57:34.140626 1475658 kubeadm.go:319] 
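Between attempts, minikube clears the failed state with `kubeadm reset --force` and re-runs the identical init; note that on the retry below every certificate is reported as "Using existing ... on disk", so only the kubelet bring-up is exercised a second time.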
	I1217 01:57:34.576208 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:57:34.589972 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:34.590043 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:34.598643 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:34.598705 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:34.598780 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:34.606738 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:34.606852 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:34.614781 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:34.622706 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:34.622772 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:34.630400 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.638446 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:34.638512 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.646373 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:34.654277 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:34.654364 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:57:34.662056 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:34.702011 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.702113 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.773814 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.773913 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.773969 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.774045 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.774109 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.774187 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.774266 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.774339 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.774416 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.774474 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.774547 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.774609 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.846561 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.846676 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.846767 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.854122 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.857357 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.857482 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.857567 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.857679 1475658 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:57:34.857759 1475658 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:57:34.857854 1475658 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:57:34.857924 1475658 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:57:34.858004 1475658 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:57:34.858087 1475658 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:57:34.858187 1475658 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:57:34.858274 1475658 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:57:34.858318 1475658 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:57:34.858386 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:35.122967 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:35.269702 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:35.473145 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:36.090186 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:36.438081 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:36.439114 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:36.441843 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:36.444972 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:36.445093 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:36.445187 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:36.447586 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:36.469683 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:36.469812 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:36.477712 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:36.478146 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:36.478375 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:36.619400 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:36.619522 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:36.620744 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001388969s
	I1217 02:01:36.620785 1475658 kubeadm.go:319] 
	I1217 02:01:36.620840 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:36.620873 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:36.620977 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:36.620988 1475658 kubeadm.go:319] 
	I1217 02:01:36.621087 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:36.621122 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:36.621154 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:36.621162 1475658 kubeadm.go:319] 
	I1217 02:01:36.624858 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:01:36.625354 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:36.625468 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:36.625731 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:36.625742 1475658 kubeadm.go:319] 
	I1217 02:01:36.625808 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:36.625889 1475658 kubeadm.go:403] duration metric: took 8m7.357719708s to StartCluster
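The 8m7s StartCluster duration is consistent with the two back-to-back kubelet-check timeouts (4m0.005993508s + 4m0.001388969s) plus roughly seven seconds of reset, certificate reuse, and re-init overhead between and around the two attempts.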
	I1217 02:01:36.625944 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:36.626024 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:36.652571 1475658 cri.go:89] found id: ""
	I1217 02:01:36.652609 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.652624 1475658 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:36.652631 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:01:36.652704 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:36.678690 1475658 cri.go:89] found id: ""
	I1217 02:01:36.678713 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.678721 1475658 logs.go:284] No container was found matching "etcd"
	I1217 02:01:36.678728 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:01:36.678789 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:36.705351 1475658 cri.go:89] found id: ""
	I1217 02:01:36.705375 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.705383 1475658 logs.go:284] No container was found matching "coredns"
	I1217 02:01:36.705389 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:36.705452 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:36.730965 1475658 cri.go:89] found id: ""
	I1217 02:01:36.730992 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.731001 1475658 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:36.731008 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:36.731070 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:36.760345 1475658 cri.go:89] found id: ""
	I1217 02:01:36.760370 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.760379 1475658 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:36.760385 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:36.760446 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:36.785560 1475658 cri.go:89] found id: ""
	I1217 02:01:36.785583 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.785592 1475658 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:36.785599 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:36.785697 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:36.814303 1475658 cri.go:89] found id: ""
	I1217 02:01:36.814328 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.814337 1475658 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:36.814347 1475658 logs.go:123] Gathering logs for container status ...
	I1217 02:01:36.814359 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:36.842640 1475658 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:36.842668 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:36.901858 1475658 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:36.901897 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:36.918036 1475658 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:36.918069 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:36.984314 1475658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:36.984350 1475658 logs.go:123] Gathering logs for containerd ...
	I1217 02:01:36.984362 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
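After the failure, minikube sweeps the CRI for every control-plane component; each `crictl ps` query above returned an empty ID list, confirming no control-plane container was ever created. A sketch of that sweep:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // For each control-plane component, ask the CRI runtime for matching
    // container IDs, as the log-gathering pass above does. Sketch only;
    // minikube runs these over ssh inside the node.
    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		if len(out) == 0 {
    			fmt.Printf("%s: no containers found\n", name)
    		} else {
    			fmt.Printf("%s: %s", name, out)
    		}
    	}
    }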
	W1217 02:01:37.028786 1475658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
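The kubeadm output above names the exact checks to run when the kubelet never reports healthy. A short sketch combining them, run inside the minikube node (the first two commands come verbatim from kubeadm's advice; the curl mirrors the healthz probe its wait loop performs):

    systemctl status kubelet                     # unit state
    journalctl -xeu kubelet                      # recent kubelet logs with explanations
    curl -sSL http://127.0.0.1:10248/healthz     # the probe kubeadm retried for 4m0s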
	W1217 02:01:37.028860 1475658 out.go:285] * 
	W1217 02:01:37.028917 1475658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
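The SystemVerification warning above says that kubelet v1.35 or newer on cgroups v1 requires explicitly setting the kubelet configuration option 'FailCgroupV1' to 'false'. A hedged sketch of what that fragment could look like in the kubelet configuration file this run writes (/var/lib/kubelet/config.yaml); the lowercase field name failCgroupV1 is assumed from the option named in the warning:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # explicitly opt in to running on cgroups v1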
	
	W1217 02:01:37.028931 1475658 out.go:285] * 
	W1217 02:01:37.031068 1475658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:01:37.037220 1475658 out.go:203] 
	W1217 02:01:37.040930 1475658 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
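The preflight lines above note that the required images can be pulled ahead of time with 'kubeadm config images pull'. With the binary and config paths this run uses (both taken from the failing command), that would be roughly:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config images pull \
      --config /var/tmp/minikube/kubeadm.yaml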
	
	W1217 02:01:37.041001 1475658 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:01:37.041022 1475658 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:01:37.044273 1475658 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
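The Suggestion line in the log proposes one concrete retry. Spelled out against this profile it would look like the following sketch; the --extra-config flag comes verbatim from the Suggestion, the remaining flags from the failing start command above:

    out/minikube-linux-arm64 start -p no-preload-178365 \
      --extra-config=kubelet.cgroup-driver=systemd \
      --memory=3072 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0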
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1475961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:53:10.944588207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbc378cb18c4db6321bba9064bec37ae2907203c00dcd497af9edc9b3f71361f",
	            "SandboxKey": "/var/run/docker/netns/dbc378cb18c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:a8:78:cd:87:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "46c074d2d98270a72981dceacb4c45383893c762846fd2a67a1498e3670844fd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
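The post-mortem dumps the full `docker inspect`; when only the container state and host port mappings matter, the same data can be narrowed with a Go template (illustrative, not part of the harness):

    docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' no-preload-178365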
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 6 (344.766281ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:01:37.475009 1491524 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
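The status output warns about a stale kubectl context, and the kubeconfig endpoint error confirms the profile is missing from the kubeconfig. The fix the warning itself suggests, scoped to this profile:

    out/minikube-linux-arm64 update-context -p no-preload-178365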
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-069646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:51 UTC │
	│ start   │ -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:52 UTC │
	│ image   │ old-k8s-version-859530 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ pause   │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ unpause │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:55:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:55:11.587586 1483412 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:11.587793 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.587821 1483412 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:11.587840 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.588238 1483412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:55:11.589101 1483412 out.go:368] Setting JSON to false
	I1217 01:55:11.589983 1483412 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27462,"bootTime":1765909050,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:55:11.590050 1483412 start.go:143] virtualization:  
	I1217 01:55:11.594008 1483412 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:55:11.598404 1483412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:55:11.598486 1483412 notify.go:221] Checking for updates...
	I1217 01:55:11.605445 1483412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:55:11.608601 1483412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:55:11.611778 1483412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:55:11.614850 1483412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:55:11.617933 1483412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:55:11.621419 1483412 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:11.621527 1483412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:55:11.640802 1483412 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:55:11.640922 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.701423 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.691901377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.701533 1483412 docker.go:319] overlay module found
	I1217 01:55:11.704806 1483412 out.go:179] * Using the docker driver based on user configuration
	I1217 01:55:11.707752 1483412 start.go:309] selected driver: docker
	I1217 01:55:11.707769 1483412 start.go:927] validating driver "docker" against <nil>
	I1217 01:55:11.707784 1483412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:55:11.708522 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.771255 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.762421806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.771409 1483412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:55:11.771445 1483412 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:55:11.771663 1483412 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:55:11.774669 1483412 out.go:179] * Using Docker driver with root privileges
	I1217 01:55:11.777523 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:11.777592 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:11.777607 1483412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
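	A note on the CNI decision just logged: minikube pairs the docker driver with the containerd runtime and therefore recommends kindnet. A minimal Go sketch of that decision, where the function name and the bridge fallback are illustrative rather than minikube's actual cni.go logic:

package main

import "fmt"

// chooseCNI mirrors the recommendation in the log: KIC drivers (docker,
// podman) combined with a non-Docker runtime default to kindnet.
func chooseCNI(driver, runtime string) string {
	if (driver == "docker" || driver == "podman") && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // assumed fallback, for illustration only
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // prints "kindnet"
}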
	I1217 01:55:11.777735 1483412 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:11.780890 1483412 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 01:55:11.783718 1483412 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:55:11.786584 1483412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:55:11.789380 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:11.789429 1483412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:55:11.789441 1483412 cache.go:65] Caching tarball of preloaded images
	I1217 01:55:11.789467 1483412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:55:11.789532 1483412 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:55:11.789541 1483412 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:55:11.789677 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:11.789696 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json: {Name:mk81bb26d654057444403d949cc7b962f958f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:11.808673 1483412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:55:11.808698 1483412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:55:11.808713 1483412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:55:11.808743 1483412 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:55:11.808846 1483412 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "newest-cni-456492"
	I1217 01:55:11.808876 1483412 start.go:93] Provisioning new machine with config: &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:55:11.808947 1483412 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:55:11.812418 1483412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:55:11.812643 1483412 start.go:159] libmachine.API.Create for "newest-cni-456492" (driver="docker")
	I1217 01:55:11.812678 1483412 client.go:173] LocalClient.Create starting
	I1217 01:55:11.812766 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:55:11.812806 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812824 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.812874 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:55:11.812896 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812911 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.813288 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:55:11.828937 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:55:11.829030 1483412 network_create.go:284] running [docker network inspect newest-cni-456492] to gather additional debugging logs...
	I1217 01:55:11.829050 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492
	W1217 01:55:11.845086 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 returned with exit code 1
	I1217 01:55:11.845116 1483412 network_create.go:287] error running [docker network inspect newest-cni-456492]: docker network inspect newest-cni-456492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-456492 not found
	I1217 01:55:11.845144 1483412 network_create.go:289] output of [docker network inspect newest-cni-456492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-456492 not found
	
	** /stderr **
	I1217 01:55:11.845236 1483412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:11.862130 1483412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:55:11.862454 1483412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:55:11.862764 1483412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:55:11.862966 1483412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 01:55:11.863436 1483412 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bb4b0}
	I1217 01:55:11.863452 1483412 network_create.go:124] attempt to create docker network newest-cni-456492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:55:11.863519 1483412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-456492 newest-cni-456492
	I1217 01:55:11.939566 1483412 network_create.go:108] docker network newest-cni-456492 192.168.85.0/24 created
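	The subnet scan above walks the 192.168.x.0/24 range, stepping the third octet by 9 (49, 58, 67, 76, 85, ...) and skipping anything already claimed by an existing bridge. A minimal sketch of that scan; the "taken" set here is hard-coded from the log, whereas minikube derives it from the host's interfaces and docker networks:

package main

import "fmt"

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet) // 192.168.85.0/24
			return
		}
	}
}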
	I1217 01:55:11.939593 1483412 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-456492" container
	I1217 01:55:11.939681 1483412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:55:11.956827 1483412 cli_runner.go:164] Run: docker volume create newest-cni-456492 --label name.minikube.sigs.k8s.io=newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:55:11.974528 1483412 oci.go:103] Successfully created a docker volume newest-cni-456492
	I1217 01:55:11.974628 1483412 cli_runner.go:164] Run: docker run --rm --name newest-cni-456492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --entrypoint /usr/bin/test -v newest-cni-456492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:55:12.497008 1483412 oci.go:107] Successfully prepared a docker volume newest-cni-456492
	I1217 01:55:12.497078 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:12.497091 1483412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:55:12.497172 1483412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:55:16.389962 1483412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.892749984s)
	I1217 01:55:16.389996 1483412 kic.go:203] duration metric: took 3.892902757s to extract preloaded images to volume ...
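	The extraction step above bind-mounts the preloaded lz4 tarball read-only into a throwaway container whose entrypoint is tar, and unpacks it into the named volume. A hedged re-creation as a standalone program (image tag and mount paths are copied from the log; pass the tarball path as the first argument; error handling is minimal by design):

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := os.Args[1] // e.g. the preloaded-images-...-containerd-overlay2-arm64.tar.lz4 path
	volume := "newest-cni-456492"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}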
	W1217 01:55:16.390136 1483412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:55:16.390261 1483412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:55:16.462546 1483412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-456492 --name newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-456492 --network newest-cni-456492 --ip 192.168.85.2 --volume newest-cni-456492:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:55:16.772361 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Running}}
	I1217 01:55:16.793387 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:16.820136 1483412 cli_runner.go:164] Run: docker exec newest-cni-456492 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:55:16.881491 1483412 oci.go:144] the created container "newest-cni-456492" has a running status.
	I1217 01:55:16.881521 1483412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa...
	I1217 01:55:17.289070 1483412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:55:17.323822 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.352076 1483412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:55:17.352103 1483412 kic_runner.go:114] Args: [docker exec --privileged newest-cni-456492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:55:17.412601 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.440021 1483412 machine.go:94] provisionDockerMachine start ...
	I1217 01:55:17.440112 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:17.465337 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:17.465706 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:17.465717 1483412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:55:17.466482 1483412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45132->127.0.0.1:34249: read: connection reset by peer
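	The handshake failure here, followed by success a few seconds later, is a normal race: sshd in the fresh container is not yet accepting connections on the first dial. A minimal retry sketch, assuming key-based auth with the id_rsa generated earlier (the helper and timings are illustrative, not minikube's libmachine code):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Second) // e.g. "connection reset by peer" while sshd boots
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
}

func main() {
	keyPEM, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/newest-cni-456492/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := dialWithRetry("127.0.0.1:34249", cfg, 10)
	if err != nil {
		panic(err)
	}
	defer client.Close()
}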
	I1217 01:55:20.597038 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.597109 1483412 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 01:55:20.597212 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.614509 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.614828 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.614859 1483412 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 01:55:20.756257 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.756341 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.774598 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.774975 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.774999 1483412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:55:20.905912 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:55:20.905939 1483412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:55:20.905956 1483412 ubuntu.go:190] setting up certificates
	I1217 01:55:20.905965 1483412 provision.go:84] configureAuth start
	I1217 01:55:20.906024 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:20.923247 1483412 provision.go:143] copyHostCerts
	I1217 01:55:20.923326 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:55:20.923339 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:55:20.923416 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:55:20.923533 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:55:20.923544 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:55:20.923576 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:55:20.923649 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:55:20.923659 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:55:20.923689 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:55:20.923744 1483412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
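	A minimal standard-library sketch of the server-cert generation above: sign a certificate carrying the SANs from the log ([127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]) against an existing CA. The ca.pem/ca-key.pem file names are placeholders, and the sketch assumes a PKCS#1 RSA CA key; this is not minikube's provision.go implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	caKeyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)) // assumes an RSA CA key

	key := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-456492"},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}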
	I1217 01:55:21.003325 1483412 provision.go:177] copyRemoteCerts
	I1217 01:55:21.003406 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:55:21.003466 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.021337 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.118292 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:55:21.145239 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:55:21.164973 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:55:21.184653 1483412 provision.go:87] duration metric: took 278.664546ms to configureAuth
	I1217 01:55:21.184681 1483412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:55:21.184876 1483412 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:21.184890 1483412 machine.go:97] duration metric: took 3.744849982s to provisionDockerMachine
	I1217 01:55:21.184897 1483412 client.go:176] duration metric: took 9.372209957s to LocalClient.Create
	I1217 01:55:21.184913 1483412 start.go:167] duration metric: took 9.372271349s to libmachine.API.Create "newest-cni-456492"
	I1217 01:55:21.184924 1483412 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 01:55:21.184935 1483412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:55:21.184993 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:55:21.185038 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.202893 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.301704 1483412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:55:21.305094 1483412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:55:21.305120 1483412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:55:21.305132 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:55:21.305183 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:55:21.305257 1483412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:55:21.305367 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:55:21.313575 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:21.332690 1483412 start.go:296] duration metric: took 147.751178ms for postStartSetup
	I1217 01:55:21.333071 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.349950 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:21.350233 1483412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:55:21.350284 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.367086 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.458630 1483412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:55:21.463314 1483412 start.go:128] duration metric: took 9.65435334s to createHost
	I1217 01:55:21.463343 1483412 start.go:83] releasing machines lock for "newest-cni-456492", held for 9.654483449s
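	The acquire/release pair around machine creation uses a named lock with a retry delay and an overall timeout (the log shows Delay:500ms Timeout:10m0s). A minimal sketch of that pattern using an O_EXCL lock file; minikube's lock.go differs in detail:

package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay) // retry every 500ms in the logged configuration
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}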
	I1217 01:55:21.463413 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.480150 1483412 ssh_runner.go:195] Run: cat /version.json
	I1217 01:55:21.480207 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.480490 1483412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:55:21.480549 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.503493 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.506377 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.593349 1483412 ssh_runner.go:195] Run: systemctl --version
	I1217 01:55:21.687982 1483412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:55:21.692115 1483412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:55:21.692182 1483412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:55:21.718403 1483412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:55:21.718427 1483412 start.go:496] detecting cgroup driver to use...
	I1217 01:55:21.718460 1483412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:55:21.718523 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:55:21.733259 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:55:21.746485 1483412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:55:21.746571 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:55:21.764553 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:55:21.782958 1483412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:55:21.908620 1483412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:55:22.026459 1483412 docker.go:234] disabling docker service ...
	I1217 01:55:22.026538 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:55:22.052603 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:55:22.068218 1483412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:55:22.193394 1483412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:55:22.321475 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:55:22.334922 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:55:22.349881 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:55:22.359035 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:55:22.368328 1483412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:55:22.368453 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:55:22.377717 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.387475 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:55:22.396690 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.405767 1483412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:55:22.414387 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:55:22.423447 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:55:22.432777 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:55:22.442244 1483412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:55:22.450102 1483412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:55:22.457779 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:22.584574 1483412 ssh_runner.go:195] Run: sudo systemctl restart containerd
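	The cgroup-driver rewrite a few lines up applies 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' with sed. The same substitution done with Go's regexp package against /etc/containerd/config.toml (path from the log; run as root), shown as an equivalent sketch:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line, mirroring sed's line-oriented edit.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// A "systemctl daemon-reload" and "systemctl restart containerd" follow in the log.
}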
	I1217 01:55:22.739170 1483412 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:55:22.739315 1483412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:55:22.743653 1483412 start.go:564] Will wait 60s for crictl version
	I1217 01:55:22.743721 1483412 ssh_runner.go:195] Run: which crictl
	I1217 01:55:22.747627 1483412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:55:22.774963 1483412 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:55:22.775088 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.795646 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.822177 1483412 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:55:22.825213 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:22.841339 1483412 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 01:55:22.845097 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
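	The one-liner above rewrites /etc/hosts by dropping any existing "host.minikube.internal" line, appending the fresh mapping, and copying the temp file back over /etc/hosts. A minimal Go equivalent of that filter-and-append, with the IP and name taken from the log:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") { // mirrors grep -v $'\thost.minikube.internal$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}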
	I1217 01:55:22.857844 1483412 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:55:22.860750 1483412 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:55:22.860891 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:22.860986 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.887811 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.887838 1483412 containerd.go:534] Images already preloaded, skipping extraction
	I1217 01:55:22.887921 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.916774 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.916798 1483412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:55:22.916806 1483412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:55:22.916901 1483412 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:55:22.916973 1483412 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:55:22.941450 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:22.941474 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:22.941497 1483412 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:55:22.941521 1483412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:55:22.941668 1483412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:55:22.941741 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:55:22.949446 1483412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:55:22.949536 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:55:22.957307 1483412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:55:22.970080 1483412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:55:22.983144 1483412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
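	A quick way to sanity-check the multi-document config scp'd above before kubeadm consumes it: decode each YAML document and print its apiVersion and kind. This sketch uses the third-party gopkg.in/yaml.v3 package and the kubeadm.yaml.new path from the log; it is a verification aid, not part of minikube's flow:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, and KubeProxyConfiguration.
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}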
	I1217 01:55:22.996455 1483412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:55:23.000264 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:23.011956 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:23.132195 1483412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:55:23.153898 1483412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 01:55:23.153924 1483412 certs.go:195] generating shared ca certs ...
	I1217 01:55:23.153953 1483412 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.154120 1483412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:55:23.154167 1483412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:55:23.154179 1483412 certs.go:257] generating profile certs ...
	I1217 01:55:23.154252 1483412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 01:55:23.154267 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt with IP's: []
	I1217 01:55:23.536556 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt ...
	I1217 01:55:23.536598 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt: {Name:mk5f328f97a5398eaf8448e799e55e14628a21cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536799 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key ...
	I1217 01:55:23.536813 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key: {Name:mk204e71ac4a7537095f4378fcacae497aae9e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536900 1483412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 01:55:23.536919 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 01:55:23.700587 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d ...
	I1217 01:55:23.700617 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d: {Name:mk2ff6ffd7e0f9e8790c41f75004f783e2e2cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700810 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d ...
	I1217 01:55:23.700838 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d: {Name:mk4a8fd878c1db6fa4ca6d31ac312311a9e574fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700939 1483412 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt
	I1217 01:55:23.701025 1483412 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key
	I1217 01:55:23.701086 1483412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 01:55:23.701104 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt with IP's: []
	I1217 01:55:24.186185 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt ...
	I1217 01:55:24.186218 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt: {Name:mk4e097689774236e217287c4769a9bc6b62d157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186434 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key ...
	I1217 01:55:24.186460 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key: {Name:mk9311419a1f9f3ab4e171bbfc5a685160d56892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186687 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:55:24.186737 1483412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:55:24.186753 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:55:24.186781 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:55:24.186819 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:55:24.186847 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:55:24.186901 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:24.187489 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:55:24.207140 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:55:24.225813 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:55:24.244898 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:55:24.264402 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:55:24.283038 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:55:24.302197 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:55:24.320347 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:55:24.339022 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:55:24.357411 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:55:24.375801 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:55:24.394312 1483412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:55:24.407959 1483412 ssh_runner.go:195] Run: openssl version
	I1217 01:55:24.414593 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.422149 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:55:24.429938 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433843 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433913 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.475535 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.483235 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.490706 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.498434 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:55:24.506686 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510403 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510492 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.551573 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:55:24.559261 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:55:24.566821 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.574182 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:55:24.581528 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585424 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585508 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.628267 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:55:24.636095 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
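	The openssl/ln sequence above is the standard OpenSSL subject-hash layout for a trust directory: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, so OpenSSL can locate the issuer by hash at verification time. A minimal by-hand equivalent (the certificate path here is illustrative, not taken from this run):
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
	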
	I1217 01:55:24.643970 1483412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:55:24.648671 1483412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:55:24.648775 1483412 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:24.648946 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:55:24.649043 1483412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:55:24.677969 1483412 cri.go:89] found id: ""
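	The empty found id: "" result simply means no kube-system containers exist yet, which is expected on a first start. The probe can be reproduced by hand on the node with the same label filter and --quiet flag that minikube logs above:
	
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	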
	I1217 01:55:24.678093 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:55:24.688459 1483412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:55:24.696458 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:55:24.696550 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:55:24.704828 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:55:24.704848 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:55:24.704931 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:55:24.712883 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:55:24.712983 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:55:24.720826 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:55:24.728999 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:55:24.729100 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:55:24.736825 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.744799 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:55:24.744867 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.752477 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:55:24.760816 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:55:24.760931 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:55:24.768678 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:55:24.810821 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:55:24.811126 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:55:24.896174 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:55:24.896294 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:55:24.896359 1483412 kubeadm.go:319] OS: Linux
	I1217 01:55:24.896426 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:55:24.896502 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:55:24.896566 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:55:24.896639 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:55:24.896704 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:55:24.896779 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:55:24.896863 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:55:24.896941 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:55:24.897010 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:55:24.971043 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:55:24.971234 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:55:24.971378 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:55:24.982063 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:55:24.988218 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:55:24.988318 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:55:24.988395 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:55:25.419455 1483412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:55:25.522339 1483412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:55:25.598229 1483412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:55:25.671518 1483412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:55:25.854804 1483412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:55:25.855019 1483412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.196066 1483412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:55:26.196425 1483412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.785707 1483412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:55:26.841556 1483412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:55:27.019008 1483412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:55:27.019328 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:55:27.196727 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:55:27.751450 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:55:27.908167 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:55:28.296645 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:55:28.549325 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:55:28.550095 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:55:28.554755 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:55:28.558438 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:55:28.558547 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:55:28.558629 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:55:28.558695 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:55:28.574196 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:55:28.574560 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:55:28.582119 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:55:28.582467 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:55:28.582759 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:55:28.732745 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:55:28.732882 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
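	The kubelet-check step that both of these clusters later hang on is a plain HTTP probe of the kubelet's local healthz endpoint; assuming the default healthz port of 10248, the manual equivalent is exactly the curl that kubeadm names in its error message:
	
	    curl -sSL http://127.0.0.1:10248/healthz    # a healthy kubelet answers "ok"
	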
	I1217 01:57:34.124748 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:57:34.124781 1475658 kubeadm.go:319] 
	I1217 01:57:34.124851 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:57:34.130032 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.130094 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.130184 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.130239 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.130274 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.130319 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.130369 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.130417 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.130466 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.130513 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.130562 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.130607 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.130655 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.130701 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.130774 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.130869 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.130959 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.131021 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.134054 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.134142 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.134206 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.134273 1475658 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:57:34.134329 1475658 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:57:34.134389 1475658 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:57:34.134439 1475658 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:57:34.134492 1475658 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:57:34.134614 1475658 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134712 1475658 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:57:34.134885 1475658 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134988 1475658 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:57:34.135097 1475658 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:57:34.135183 1475658 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:57:34.135283 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:34.135344 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:34.135402 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:34.135459 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:34.135521 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:34.135575 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:34.135655 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:34.135721 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:34.138598 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:34.138713 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:34.138799 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:34.138871 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:34.138982 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:34.139083 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:34.139203 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:34.139301 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:34.139344 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:34.139483 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:34.139594 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.139663 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005993508s
	I1217 01:57:34.139667 1475658 kubeadm.go:319] 
	I1217 01:57:34.139728 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:57:34.139770 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:57:34.139882 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:57:34.139887 1475658 kubeadm.go:319] 
	I1217 01:57:34.139998 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:57:34.140032 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:57:34.140065 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 01:57:34.140174 1475658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005993508s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
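	The cgroups v1 deprecation warning in the stderr above points at a concrete kubelet knob. A minimal KubeletConfiguration fragment that opts back in, assuming a kubelet version (v1.31 or newer) where the failCgroupV1 field exists, would be:
	
	    # YAML fragment, merged into the kubelet's existing configuration
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	
	Nothing in this log confirms that the warning is the root cause of the healthz timeout, so this fragment is a hypothesis to test, not an established fix.
	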
	
	I1217 01:57:34.140253 1475658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:57:34.140626 1475658 kubeadm.go:319] 
	I1217 01:57:34.576208 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:57:34.589972 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:34.590043 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:34.598643 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:34.598705 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:34.598780 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:34.606738 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:34.606852 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:34.614781 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:34.622706 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:34.622772 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:34.630400 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.638446 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:34.638512 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.646373 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:34.654277 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:34.654364 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:57:34.662056 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:34.702011 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.702113 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.773814 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.773913 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.773969 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.774045 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.774109 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.774187 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.774266 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.774339 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.774416 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.774474 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.774547 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.774609 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.846561 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.846676 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.846767 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.854122 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.857357 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.857482 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.857567 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.857679 1475658 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:57:34.857759 1475658 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:57:34.857854 1475658 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:57:34.857924 1475658 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:57:34.858004 1475658 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:57:34.858087 1475658 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:57:34.858187 1475658 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:57:34.858274 1475658 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:57:34.858318 1475658 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:57:34.858386 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:35.122967 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:35.269702 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:35.473145 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:36.090186 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:36.438081 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:36.439114 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:36.441843 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:36.444972 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:36.445093 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:36.445187 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:36.447586 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:36.469683 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:36.469812 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:36.477712 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:36.478146 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:36.478375 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:36.619400 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:36.619522 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:59:28.732281 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001211696s
	I1217 01:59:28.732307 1483412 kubeadm.go:319] 
	I1217 01:59:28.732365 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:59:28.732399 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:59:28.732504 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:59:28.732508 1483412 kubeadm.go:319] 
	I1217 01:59:28.732613 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:59:28.732645 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:59:28.732676 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:59:28.732680 1483412 kubeadm.go:319] 
	I1217 01:59:28.737697 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:59:28.738161 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:59:28.738281 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:59:28.738538 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:59:28.738549 1483412 kubeadm.go:319] 
	I1217 01:59:28.738623 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 01:59:28.738846 1483412 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001211696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
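	Each failed attempt ends with the forced kubeadm reset logged just below, followed by a check that the kubelet service is no longer active, before the stale configuration files are removed and init is retried. Reproduced interactively on the node, that teardown is roughly:
	
	    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo systemctl is-active --quiet kubelet && echo "kubelet still active"
	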
	
	I1217 01:59:28.738945 1483412 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:59:29.148897 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:59:29.163236 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:59:29.163322 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:59:29.173290 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:59:29.173315 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:59:29.173378 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:59:29.189171 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:59:29.189238 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:59:29.198769 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:59:29.206895 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:59:29.206960 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:59:29.214464 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.222503 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:59:29.222596 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.230032 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:59:29.237621 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:59:29.237713 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:59:29.244936 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:59:29.283887 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:59:29.284148 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:59:29.355640 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:59:29.355800 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:59:29.355878 1483412 kubeadm.go:319] OS: Linux
	I1217 01:59:29.355962 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:59:29.356047 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:59:29.356127 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:59:29.356205 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:59:29.356285 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:59:29.356371 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:59:29.356449 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:59:29.356530 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:59:29.356609 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:59:29.424082 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:59:29.424247 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:59:29.424404 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:59:29.430675 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:59:29.436331 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:59:29.436427 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:59:29.436498 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:59:29.436614 1483412 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:59:29.436760 1483412 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:59:29.436868 1483412 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:59:29.436955 1483412 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:59:29.437066 1483412 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:59:29.437169 1483412 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:59:29.437294 1483412 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:59:29.437455 1483412 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:59:29.437914 1483412 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:59:29.438023 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:59:29.643674 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:59:29.811188 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:59:30.039930 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:59:30.429283 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:59:30.523266 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:59:30.523965 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:59:30.526610 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:59:30.529865 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:59:30.529993 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:59:30.530148 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:59:30.530270 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:59:30.551379 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:59:30.551496 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:59:30.562968 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:59:30.563492 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:59:30.563746 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:59:30.712531 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:59:30.712658 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:36.620744 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001388969s
	I1217 02:01:36.620785 1475658 kubeadm.go:319] 
	I1217 02:01:36.620840 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:36.620873 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:36.620977 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:36.620988 1475658 kubeadm.go:319] 
	I1217 02:01:36.621087 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:36.621122 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:36.621154 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:36.621162 1475658 kubeadm.go:319] 
	I1217 02:01:36.624858 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:01:36.625354 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:36.625468 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:36.625731 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:36.625742 1475658 kubeadm.go:319] 
	I1217 02:01:36.625808 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:36.625889 1475658 kubeadm.go:403] duration metric: took 8m7.357719708s to StartCluster
	I1217 02:01:36.625944 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:36.626024 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:36.652571 1475658 cri.go:89] found id: ""
	I1217 02:01:36.652609 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.652624 1475658 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:36.652631 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:01:36.652704 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:36.678690 1475658 cri.go:89] found id: ""
	I1217 02:01:36.678713 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.678721 1475658 logs.go:284] No container was found matching "etcd"
	I1217 02:01:36.678728 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:01:36.678789 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:36.705351 1475658 cri.go:89] found id: ""
	I1217 02:01:36.705375 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.705383 1475658 logs.go:284] No container was found matching "coredns"
	I1217 02:01:36.705389 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:36.705452 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:36.730965 1475658 cri.go:89] found id: ""
	I1217 02:01:36.730992 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.731001 1475658 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:36.731008 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:36.731070 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:36.760345 1475658 cri.go:89] found id: ""
	I1217 02:01:36.760370 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.760379 1475658 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:36.760385 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:36.760446 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:36.785560 1475658 cri.go:89] found id: ""
	I1217 02:01:36.785583 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.785592 1475658 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:36.785599 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:36.785697 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:36.814303 1475658 cri.go:89] found id: ""
	I1217 02:01:36.814328 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.814337 1475658 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:36.814347 1475658 logs.go:123] Gathering logs for container status ...
	I1217 02:01:36.814359 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:36.842640 1475658 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:36.842668 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:36.901858 1475658 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:36.901897 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:36.918036 1475658 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:36.918069 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:36.984314 1475658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[five connection-refused lines identical to the stderr block above]
	
	** /stderr **
	I1217 02:01:36.984350 1475658 logs.go:123] Gathering logs for containerd ...
	I1217 02:01:36.984362 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:01:37.028786 1475658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
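kubeadm's two troubleshooting pointers above map directly onto this environment; a sketch, assuming the no-preload-178365 profile from this test (illustrative commands, not captured output):

	out/minikube-linux-arm64 -p no-preload-178365 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p no-preload-178365 ssh -- sudo journalctl -xeu kubelet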
	W1217 02:01:37.028860 1475658 out.go:285] * 
	W1217 02:01:37.028917 1475658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1217 02:01:37.028931 1475658 out.go:285] * 
	W1217 02:01:37.031068 1475658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:01:37.037220 1475658 out.go:203] 
	W1217 02:01:37.040930 1475658 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above]
	
	W1217 02:01:37.041001 1475658 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:01:37.041022 1475658 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:01:37.044273 1475658 out.go:203] 
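The Suggestion above targets the kubelet cgroup driver; a sketch of a retry with that extra-config applied (the kubelet journal below shows the failure is the cgroup v1 validation itself, so on a cgroup v1 host this flag alone may not be sufficient):

	out/minikube-linux-arm64 start -p no-preload-178365 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd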
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:53:20 no-preload-178365 containerd[756]: time="2025-12-17T01:53:20.013986261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.083205389Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.085894407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.093386032Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.094057489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.042937201Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.045143048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058075151Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058727605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.132008848Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.135132972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.143661850Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.144058260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.267145399Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.269771295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.277531008Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.278492420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.715372635Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.717609801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.726893123Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.727845953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.108154182Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.111113669Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120555130Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120954125Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:38.169959    5538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:38.170625    5538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:38.172256    5538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:38.172789    5538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:38.173957    5538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:01:38 up  7:44,  0 user,  load average: 0.14, 0.96, 1.67
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:01:34 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:35 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 02:01:35 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:35 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:35 no-preload-178365 kubelet[5339]: E1217 02:01:35.703534    5339 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:35 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:35 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:36 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 02:01:36 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:36 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:36 no-preload-178365 kubelet[5345]: E1217 02:01:36.436250    5345 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:36 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:36 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 kubelet[5433]: E1217 02:01:37.203785    5433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 kubelet[5491]: E1217 02:01:37.940889    5491 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
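The journal above is the root cause of the unanswered health check at 127.0.0.1:10248: every kubelet restart exits during configuration validation because the host is on cgroup v1. The SystemVerification warning earlier names the opt-out; a hedged sketch of applying it to the config file kubeadm wrote, assuming the field spelling 'failCgroupV1' in KubeletConfiguration (whether opting back into deprecated cgroup v1 support is appropriate for CI is a separate question):

	out/minikube-linux-arm64 -p no-preload-178365 ssh -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"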
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 6 (362.309904ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:01:38.654708 1491744 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (509.05s)
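For the stale-kubeconfig warning in the status output above, the fix the tool itself names is a single command; sketched with this profile:

	out/minikube-linux-arm64 update-context -p no-preload-178365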

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (501.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1217 01:55:35.918542 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:35.925014 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:35.936585 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:35.958120 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:35.999668 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:36.081261 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:36.242857 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:36.564677 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:37.206691 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.488173 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:41.049480 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:46.171349 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:56.413363 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:16.895009 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.441804 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.448184 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.459597 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.480959 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.522429 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.603913 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:33.765568 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:34.087315 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:34.729472 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:36.011306 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:38.572813 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:43.694678 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:53.937026 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:56.876747 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:57.856391 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:14.420554 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:55.381908 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:58:12.516342 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:58:19.778477 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:59:17.303300 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:59:33.491854 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:59:50.423211 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:00:09.433432 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:00:35.917873 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:01:03.620159 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:01:33.441846 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.635098688s)

                                                
                                                
-- stdout --
	* [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:55:11.587586 1483412 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:11.587793 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.587821 1483412 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:11.587840 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.588238 1483412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:55:11.589101 1483412 out.go:368] Setting JSON to false
	I1217 01:55:11.589983 1483412 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27462,"bootTime":1765909050,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:55:11.590050 1483412 start.go:143] virtualization:  
	I1217 01:55:11.594008 1483412 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:55:11.598404 1483412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:55:11.598486 1483412 notify.go:221] Checking for updates...
	I1217 01:55:11.605445 1483412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:55:11.608601 1483412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:55:11.611778 1483412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:55:11.614850 1483412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:55:11.617933 1483412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:55:11.621419 1483412 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:11.621527 1483412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:55:11.640802 1483412 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:55:11.640922 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.701423 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.691901377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.701533 1483412 docker.go:319] overlay module found
	I1217 01:55:11.704806 1483412 out.go:179] * Using the docker driver based on user configuration
	I1217 01:55:11.707752 1483412 start.go:309] selected driver: docker
	I1217 01:55:11.707769 1483412 start.go:927] validating driver "docker" against <nil>
	I1217 01:55:11.707784 1483412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:55:11.708522 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.771255 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.762421806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.771409 1483412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:55:11.771445 1483412 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:55:11.771663 1483412 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:55:11.774669 1483412 out.go:179] * Using Docker driver with root privileges
	I1217 01:55:11.777523 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:11.777592 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:11.777607 1483412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:55:11.777735 1483412 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:11.780890 1483412 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 01:55:11.783718 1483412 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:55:11.786584 1483412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:55:11.789380 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:11.789429 1483412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:55:11.789441 1483412 cache.go:65] Caching tarball of preloaded images
	I1217 01:55:11.789467 1483412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:55:11.789532 1483412 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:55:11.789541 1483412 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:55:11.789677 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:11.789696 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json: {Name:mk81bb26d654057444403d949cc7b962f958f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:11.808673 1483412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:55:11.808698 1483412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:55:11.808713 1483412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:55:11.808743 1483412 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:55:11.808846 1483412 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "newest-cni-456492"
	I1217 01:55:11.808876 1483412 start.go:93] Provisioning new machine with config: &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:55:11.808947 1483412 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:55:11.812418 1483412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:55:11.812643 1483412 start.go:159] libmachine.API.Create for "newest-cni-456492" (driver="docker")
	I1217 01:55:11.812678 1483412 client.go:173] LocalClient.Create starting
	I1217 01:55:11.812766 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:55:11.812806 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812824 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.812874 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:55:11.812896 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812911 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.813288 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:55:11.828937 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:55:11.829030 1483412 network_create.go:284] running [docker network inspect newest-cni-456492] to gather additional debugging logs...
	I1217 01:55:11.829050 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492
	W1217 01:55:11.845086 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 returned with exit code 1
	I1217 01:55:11.845116 1483412 network_create.go:287] error running [docker network inspect newest-cni-456492]: docker network inspect newest-cni-456492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-456492 not found
	I1217 01:55:11.845144 1483412 network_create.go:289] output of [docker network inspect newest-cni-456492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-456492 not found
	
	** /stderr **
	I1217 01:55:11.845236 1483412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:11.862130 1483412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:55:11.862454 1483412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:55:11.862764 1483412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:55:11.862966 1483412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 01:55:11.863436 1483412 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bb4b0}
	I1217 01:55:11.863452 1483412 network_create.go:124] attempt to create docker network newest-cni-456492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:55:11.863519 1483412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-456492 newest-cni-456492
	I1217 01:55:11.939566 1483412 network_create.go:108] docker network newest-cni-456492 192.168.85.0/24 created
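The four "skipping subnet" lines above show the free-subnet scan: candidate /24 networks are walked from 192.168.49.0 in steps of 9 (49, 58, 67, 76, 85) until one is not already claimed by an existing bridge interface. A sketch of that walk under those assumptions (firstFreeSubnet and isTaken are stand-ins for minikube's real interface lookup):

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, .58, .67, ... as the log shows,
// returning the first candidate the predicate reports as unused.
func firstFreeSubnet(isTaken func(string) bool) string {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !isTaken(cidr) {
			return cidr
		}
	}
	return ""
}

func main() {
	// The four bridges the log skipped before settling on .85.
	used := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(func(c string) bool { return used[c] })) // 192.168.85.0/24
}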
	I1217 01:55:11.939593 1483412 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-456492" container
	I1217 01:55:11.939681 1483412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:55:11.956827 1483412 cli_runner.go:164] Run: docker volume create newest-cni-456492 --label name.minikube.sigs.k8s.io=newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:55:11.974528 1483412 oci.go:103] Successfully created a docker volume newest-cni-456492
	I1217 01:55:11.974628 1483412 cli_runner.go:164] Run: docker run --rm --name newest-cni-456492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --entrypoint /usr/bin/test -v newest-cni-456492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:55:12.497008 1483412 oci.go:107] Successfully prepared a docker volume newest-cni-456492
	I1217 01:55:12.497078 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:12.497091 1483412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:55:12.497172 1483412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:55:16.389962 1483412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.892749984s)
	I1217 01:55:16.389996 1483412 kic.go:203] duration metric: took 3.892902757s to extract preloaded images to volume ...
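The docker run invocation above extracts the lz4 preload inside a throwaway container so the images land directly in the newest-cni-456492 volume, with no host-side unpacking. A sketch of issuing the same command from Go with os/exec, using the paths from the log (extractPreload is a hypothetical helper):

package preload

import "os/exec"

// extractPreload mirrors the logged command: mount the tarball read-only,
// mount the target volume at /extractDir, and run tar with lz4 decompression
// inside the kic base image.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}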
	W1217 01:55:16.390136 1483412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:55:16.390261 1483412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:55:16.462546 1483412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-456492 --name newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-456492 --network newest-cni-456492 --ip 192.168.85.2 --volume newest-cni-456492:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:55:16.772361 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Running}}
	I1217 01:55:16.793387 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:16.820136 1483412 cli_runner.go:164] Run: docker exec newest-cni-456492 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:55:16.881491 1483412 oci.go:144] the created container "newest-cni-456492" has a running status.
	I1217 01:55:16.881521 1483412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa...
	I1217 01:55:17.289070 1483412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:55:17.323822 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.352076 1483412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:55:17.352103 1483412 kic_runner.go:114] Args: [docker exec --privileged newest-cni-456492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:55:17.412601 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.440021 1483412 machine.go:94] provisionDockerMachine start ...
	I1217 01:55:17.440112 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:17.465337 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:17.465706 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:17.465717 1483412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:55:17.466482 1483412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45132->127.0.0.1:34249: read: connection reset by peer
	I1217 01:55:20.597038 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
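The handshake failure at 01:55:17 followed by a clean result at 01:55:20 is the usual pattern here: sshd inside the fresh container is not ready yet, so the client retries until the forwarded port answers. A sketch of such a wait loop, using a plain TCP dial as a stand-in for the full SSH handshake (waitForSSH and the timing policy are assumptions, not minikube's exact code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries the dial until the forwarded port accepts a
// connection, mirroring the reset-then-success pattern in the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn.Close() // port is up; real code would handshake here
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// 34249 is the host port Docker mapped to the container's 22/tcp above.
	if err := waitForSSH("127.0.0.1:34249", 30*time.Second); err != nil {
		fmt.Println("ssh port never came up:", err)
	}
}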
	
	I1217 01:55:20.597109 1483412 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 01:55:20.597212 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.614509 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.614828 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.614859 1483412 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 01:55:20.756257 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.756341 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.774598 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.774975 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.774999 1483412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:55:20.905912 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:55:20.905939 1483412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:55:20.905956 1483412 ubuntu.go:190] setting up certificates
	I1217 01:55:20.905965 1483412 provision.go:84] configureAuth start
	I1217 01:55:20.906024 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:20.923247 1483412 provision.go:143] copyHostCerts
	I1217 01:55:20.923326 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:55:20.923339 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:55:20.923416 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:55:20.923533 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:55:20.923544 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:55:20.923576 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:55:20.923649 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:55:20.923659 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:55:20.923689 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:55:20.923744 1483412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
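configureAuth signs a server certificate with the shared CA, listing both IP and DNS SANs exactly as printed in the san=[...] field above. A minimal crypto/x509 sketch of issuing such a leaf certificate, assuming the CA pair has already been loaded from ca.pem/ca-key.pem (issueServerCert is illustrative, not minikube's code; the generated leaf key is discarded for brevity):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a leaf valid for the IPs and hostnames listed in
// the san=[...] log line, returning the DER-encoded certificate.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-456492"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
}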
	I1217 01:55:21.003325 1483412 provision.go:177] copyRemoteCerts
	I1217 01:55:21.003406 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:55:21.003466 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.021337 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.118292 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:55:21.145239 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:55:21.164973 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:55:21.184653 1483412 provision.go:87] duration metric: took 278.664546ms to configureAuth
	I1217 01:55:21.184681 1483412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:55:21.184876 1483412 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:21.184890 1483412 machine.go:97] duration metric: took 3.744849982s to provisionDockerMachine
	I1217 01:55:21.184897 1483412 client.go:176] duration metric: took 9.372209957s to LocalClient.Create
	I1217 01:55:21.184913 1483412 start.go:167] duration metric: took 9.372271349s to libmachine.API.Create "newest-cni-456492"
	I1217 01:55:21.184924 1483412 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 01:55:21.184935 1483412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:55:21.184993 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:55:21.185038 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.202893 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.301704 1483412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:55:21.305094 1483412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:55:21.305120 1483412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:55:21.305132 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:55:21.305183 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:55:21.305257 1483412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:55:21.305367 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:55:21.313575 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:21.332690 1483412 start.go:296] duration metric: took 147.751178ms for postStartSetup
	I1217 01:55:21.333071 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.349950 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:21.350233 1483412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:55:21.350284 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.367086 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.458630 1483412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:55:21.463314 1483412 start.go:128] duration metric: took 9.65435334s to createHost
	I1217 01:55:21.463343 1483412 start.go:83] releasing machines lock for "newest-cni-456492", held for 9.654483449s
	I1217 01:55:21.463413 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.480150 1483412 ssh_runner.go:195] Run: cat /version.json
	I1217 01:55:21.480207 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.480490 1483412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:55:21.480549 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.503493 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.506377 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.593349 1483412 ssh_runner.go:195] Run: systemctl --version
	I1217 01:55:21.687982 1483412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:55:21.692115 1483412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:55:21.692182 1483412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:55:21.718403 1483412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:55:21.718427 1483412 start.go:496] detecting cgroup driver to use...
	I1217 01:55:21.718460 1483412 detect.go:187] detected "cgroupfs" cgroup driver on host os
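detect.go reports the host's cgroup driver here, and that answer drives the SystemdCgroup edits to containerd's config further down. One way to obtain the same value, and what this sketch assumes, is to ask the Docker daemon directly through its info template:

package detect

import (
	"os/exec"
	"strings"
)

// hostCgroupDriver shells out to docker info to learn whether the daemon
// runs cgroupfs or systemd; on this host it reports "cgroupfs".
func hostCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}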
	I1217 01:55:21.718523 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:55:21.733259 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:55:21.746485 1483412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:55:21.746571 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:55:21.764553 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:55:21.782958 1483412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:55:21.908620 1483412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:55:22.026459 1483412 docker.go:234] disabling docker service ...
	I1217 01:55:22.026538 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:55:22.052603 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:55:22.068218 1483412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:55:22.193394 1483412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:55:22.321475 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:55:22.334922 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:55:22.349881 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:55:22.359035 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:55:22.368328 1483412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:55:22.368453 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:55:22.377717 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.387475 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:55:22.396690 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.405767 1483412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:55:22.414387 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:55:22.423447 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:55:22.432777 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:55:22.442244 1483412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:55:22.450102 1483412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:55:22.457779 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:22.584574 1483412 ssh_runner.go:195] Run: sudo systemctl restart containerd
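The preceding run of sed commands patches /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver, migrate io.containerd.runtime.v1.linux and runc.v1 names to io.containerd.runc.v2, and fix conf_dir, before the daemon-reload and containerd restart above. For illustration only, the SystemdCgroup edit expressed as a Go regexp (the real flow stays in sed over SSH):

package containerdcfg

import "regexp"

var systemdCgroupLine = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

// forceCgroupfs rewrites every SystemdCgroup assignment to false, matching
// the logged sed expression 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
func forceCgroupfs(configTOML []byte) []byte {
	return systemdCgroupLine.ReplaceAll(configTOML, []byte("${1}SystemdCgroup = false"))
}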
	I1217 01:55:22.739170 1483412 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:55:22.739315 1483412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:55:22.743653 1483412 start.go:564] Will wait 60s for crictl version
	I1217 01:55:22.743721 1483412 ssh_runner.go:195] Run: which crictl
	I1217 01:55:22.747627 1483412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:55:22.774963 1483412 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:55:22.775088 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.795646 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.822177 1483412 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:55:22.825213 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:22.841339 1483412 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 01:55:22.845097 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:22.857844 1483412 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:55:22.860750 1483412 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:55:22.860891 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:22.860986 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.887811 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.887838 1483412 containerd.go:534] Images already preloaded, skipping extraction
	I1217 01:55:22.887921 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.916774 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.916798 1483412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:55:22.916806 1483412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:55:22.916901 1483412 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:55:22.916973 1483412 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:55:22.941450 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:22.941474 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:22.941497 1483412 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:55:22.941521 1483412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:55:22.941668 1483412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:55:22.941741 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:55:22.949446 1483412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:55:22.949536 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:55:22.957307 1483412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:55:22.970080 1483412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:55:22.983144 1483412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
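The three "scp memory" lines copy generated bytes (the kubelet drop-in, the unit file, and kubeadm.yaml.new) straight from memory to the node, with no temporary file on the host. A sketch of the same pattern over golang.org/x/crypto/ssh, assuming an already-established session (copyMemory is a hypothetical helper, not minikube's actual ssh_runner):

package sshcopy

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// copyMemory streams an in-memory payload to a remote path by piping it
// into sudo tee, the idea behind the "scp memory --> ..." lines above.
func copyMemory(sess *ssh.Session, data []byte, remotePath string) error {
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
}

An ssh.Session can only Run one command, so each payload gets its own session in practice.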
	I1217 01:55:22.996455 1483412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:55:23.000264 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:23.011956 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:23.132195 1483412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:55:23.153898 1483412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 01:55:23.153924 1483412 certs.go:195] generating shared ca certs ...
	I1217 01:55:23.153953 1483412 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.154120 1483412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:55:23.154167 1483412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:55:23.154179 1483412 certs.go:257] generating profile certs ...
	I1217 01:55:23.154252 1483412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 01:55:23.154267 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt with IP's: []
	I1217 01:55:23.536556 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt ...
	I1217 01:55:23.536598 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt: {Name:mk5f328f97a5398eaf8448e799e55e14628a21cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536799 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key ...
	I1217 01:55:23.536813 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key: {Name:mk204e71ac4a7537095f4378fcacae497aae9e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536900 1483412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 01:55:23.536919 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 01:55:23.700587 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d ...
	I1217 01:55:23.700617 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d: {Name:mk2ff6ffd7e0f9e8790c41f75004f783e2e2cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700810 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d ...
	I1217 01:55:23.700838 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d: {Name:mk4a8fd878c1db6fa4ca6d31ac312311a9e574fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700939 1483412 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt
	I1217 01:55:23.701025 1483412 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key
	I1217 01:55:23.701086 1483412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 01:55:23.701104 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt with IP's: []
	I1217 01:55:24.186185 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt ...
	I1217 01:55:24.186218 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt: {Name:mk4e097689774236e217287c4769a9bc6b62d157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186434 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key ...
	I1217 01:55:24.186460 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key: {Name:mk9311419a1f9f3ab4e171bbfc5a685160d56892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186687 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:55:24.186737 1483412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:55:24.186753 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:55:24.186781 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:55:24.186819 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:55:24.186847 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:55:24.186901 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:24.187489 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:55:24.207140 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:55:24.225813 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:55:24.244898 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:55:24.264402 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:55:24.283038 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:55:24.302197 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:55:24.320347 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:55:24.339022 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:55:24.357411 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:55:24.375801 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:55:24.394312 1483412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:55:24.407959 1483412 ssh_runner.go:195] Run: openssl version
	I1217 01:55:24.414593 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.422149 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:55:24.429938 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433843 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433913 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.475535 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.483235 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.490706 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.498434 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:55:24.506686 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510403 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510492 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.551573 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:55:24.559261 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:55:24.566821 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.574182 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:55:24.581528 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585424 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585508 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.628267 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:55:24.636095 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
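The test/ln/openssl triplets above maintain OpenSSL's hashed lookup directory: each trusted cert gets a symlink named <subject-hash>.0, e.g. b5213941.0 for minikubeCA.pem. Since the hash is OpenSSL's own subject-name digest, this sketch simply asks the openssl binary for it rather than reimplementing it (linkByHash is illustrative):

package cahash

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash recreates the logged ln -fs step: compute the subject hash
// with openssl, then point /etc/ssl/certs/<hash>.0 at the certificate.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // -f semantics: replace any stale link first
	return os.Symlink(certPath, link)
}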
	I1217 01:55:24.643970 1483412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:55:24.648671 1483412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:55:24.648775 1483412 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:24.648946 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:55:24.649043 1483412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:55:24.677969 1483412 cri.go:89] found id: ""
	I1217 01:55:24.678093 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:55:24.688459 1483412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:55:24.696458 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:55:24.696550 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:55:24.704828 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:55:24.704848 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:55:24.704931 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:55:24.712883 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:55:24.712983 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:55:24.720826 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:55:24.728999 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:55:24.729100 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:55:24.736825 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.744799 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:55:24.744867 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.752477 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:55:24.760816 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:55:24.760931 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
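The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. A minimal bash sketch of the same check, with the file list and endpoint taken from the log lines above:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero when the endpoint is absent (or the file is missing),
	    # in which case the file is removed so `kubeadm init` rewrites it.
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	    fi
	done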
	I1217 01:55:24.768678 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:55:24.810821 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:55:24.811126 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:55:24.896174 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:55:24.896294 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:55:24.896359 1483412 kubeadm.go:319] OS: Linux
	I1217 01:55:24.896426 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:55:24.896502 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:55:24.896566 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:55:24.896639 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:55:24.896704 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:55:24.896779 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:55:24.896863 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:55:24.896941 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:55:24.897010 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:55:24.971043 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:55:24.971234 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:55:24.971378 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:55:24.982063 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:55:24.988218 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:55:24.988318 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:55:24.988395 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:55:25.419455 1483412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:55:25.522339 1483412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:55:25.598229 1483412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:55:25.671518 1483412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:55:25.854804 1483412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:55:25.855019 1483412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.196066 1483412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:55:26.196425 1483412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.785707 1483412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:55:26.841556 1483412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:55:27.019008 1483412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:55:27.019328 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:55:27.196727 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:55:27.751450 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:55:27.908167 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:55:28.296645 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:55:28.549325 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:55:28.550095 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:55:28.554755 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:55:28.558438 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:55:28.558547 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:55:28.558629 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:55:28.558695 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:55:28.574196 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:55:28.574560 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:55:28.582119 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:55:28.582467 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:55:28.582759 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:55:28.732745 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:55:28.732882 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:59:28.732281 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001211696s
	I1217 01:59:28.732307 1483412 kubeadm.go:319] 
	I1217 01:59:28.732365 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:59:28.732399 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:59:28.732504 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:59:28.732508 1483412 kubeadm.go:319] 
	I1217 01:59:28.732613 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:59:28.732645 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:59:28.732676 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:59:28.732680 1483412 kubeadm.go:319] 
	I1217 01:59:28.737697 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:59:28.738161 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:59:28.738281 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:59:28.738538 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:59:28.738549 1483412 kubeadm.go:319] 
	I1217 01:59:28.738623 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
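At this point kubeadm has polled the kubelet's local health endpoint for the full 4m0s window without a single healthy reply. The triage it recommends condenses to a few commands; a minimal sketch, with the endpoint and unit name taken from the messages above:

	# Probe the same endpoint kubeadm's kubelet-check polls.
	curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
	# Then inspect the kubelet unit itself, as the error text suggests.
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50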
	W1217 01:59:28.738846 1483412 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001211696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:59:28.738945 1483412 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:59:29.148897 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:59:29.163236 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:59:29.163322 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:59:29.173290 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:59:29.173315 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:59:29.173378 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:59:29.189171 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:59:29.189238 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:59:29.198769 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:59:29.206895 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:59:29.206960 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:59:29.214464 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.222503 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:59:29.222596 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.230032 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:59:29.237621 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:59:29.237713 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:59:29.244936 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:59:29.283887 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:59:29.284148 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:59:29.355640 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:59:29.355800 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:59:29.355878 1483412 kubeadm.go:319] OS: Linux
	I1217 01:59:29.355962 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:59:29.356047 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:59:29.356127 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:59:29.356205 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:59:29.356285 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:59:29.356371 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:59:29.356449 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:59:29.356530 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:59:29.356609 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:59:29.424082 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:59:29.424247 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:59:29.424404 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:59:29.430675 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:59:29.436331 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:59:29.436427 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:59:29.436498 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:59:29.436614 1483412 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:59:29.436760 1483412 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:59:29.436868 1483412 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:59:29.436955 1483412 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:59:29.437066 1483412 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:59:29.437169 1483412 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:59:29.437294 1483412 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:59:29.437455 1483412 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:59:29.437914 1483412 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:59:29.438023 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:59:29.643674 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:59:29.811188 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:59:30.039930 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:59:30.429283 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:59:30.523266 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:59:30.523965 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:59:30.526610 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:59:30.529865 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:59:30.529993 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:59:30.530148 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:59:30.530270 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:59:30.551379 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:59:30.551496 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:59:30.562968 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:59:30.563492 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:59:30.563746 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:59:30.712531 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:59:30.712658 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:03:30.712264 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000165s
	I1217 02:03:30.712292 1483412 kubeadm.go:319] 
	I1217 02:03:30.712354 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:03:30.712387 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:03:30.712502 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:03:30.712534 1483412 kubeadm.go:319] 
	I1217 02:03:30.712837 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:03:30.712881 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:03:30.712921 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:03:30.712927 1483412 kubeadm.go:319] 
	I1217 02:03:30.716667 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:03:30.717232 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:03:30.717376 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:03:30.717666 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:03:30.717678 1483412 kubeadm.go:319] 
	I1217 02:03:30.717747 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
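The retry fails identically, and the same two preflight warnings recur. The cgroups-v1 warning names a concrete knob, the kubelet configuration option FailCgroupV1, and the "[patches] Applied patch ... to target "kubeletconfiguration"" lines show kubeadm is already consuming kubelet-config patches here. A hypothetical mitigation sketch only; the patch directory and filename below are assumptions, not minikube's actual layout:

	# Hypothetical: a kubeadm strategic-merge patch setting failCgroupV1 to false,
	# as the deprecation warning above instructs. Path and filename are assumed.
	sudo tee /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml >/dev/null <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Note the warning also says the validation must be explicitly skipped, so a patch alone would not silence the preflight check, and nothing in this log confirms cgroups are actually the root cause: every CGROUPS_* probe above reports enabled.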
	I1217 02:03:30.717804 1483412 kubeadm.go:403] duration metric: took 8m6.069034531s to StartCluster
	I1217 02:03:30.717842 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:03:30.717911 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:03:30.742283 1483412 cri.go:89] found id: ""
	I1217 02:03:30.742310 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.742319 1483412 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:03:30.742326 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:03:30.742390 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:03:30.768192 1483412 cri.go:89] found id: ""
	I1217 02:03:30.768214 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.768223 1483412 logs.go:284] No container was found matching "etcd"
	I1217 02:03:30.768229 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:03:30.768289 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:03:30.792035 1483412 cri.go:89] found id: ""
	I1217 02:03:30.792057 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.792065 1483412 logs.go:284] No container was found matching "coredns"
	I1217 02:03:30.792071 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:03:30.792131 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:03:30.816804 1483412 cri.go:89] found id: ""
	I1217 02:03:30.816825 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.816833 1483412 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:03:30.816840 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:03:30.816896 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:03:30.841903 1483412 cri.go:89] found id: ""
	I1217 02:03:30.841925 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.841934 1483412 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:03:30.841940 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:03:30.841996 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:03:30.865941 1483412 cri.go:89] found id: ""
	I1217 02:03:30.866019 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.866042 1483412 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:03:30.866062 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:03:30.866154 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:03:30.890127 1483412 cri.go:89] found id: ""
	I1217 02:03:30.890151 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.890160 1483412 logs.go:284] No container was found matching "kindnet"
	I1217 02:03:30.890169 1483412 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:03:30.890180 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:03:30.956000 1483412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
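The describe-nodes failure is not a kubectl problem: every request to localhost:8443 is refused because no apiserver is listening, consistent with the empty crictl listings above. A quick probe sketch to confirm that locally:

	# If the kube-apiserver container never started, nothing answers on 8443.
	curl -sk --max-time 5 https://localhost:8443/healthz || true
	sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"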
	I1217 02:03:30.956023 1483412 logs.go:123] Gathering logs for containerd ...
	I1217 02:03:30.956037 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:03:30.994676 1483412 logs.go:123] Gathering logs for container status ...
	I1217 02:03:30.994742 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:03:31.049766 1483412 logs.go:123] Gathering logs for kubelet ...
	I1217 02:03:31.049839 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:03:31.115613 1483412 logs.go:123] Gathering logs for dmesg ...
	I1217 02:03:31.115651 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
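After the init failure, minikube sweeps the node for whatever evidence exists: per-component CRI listings, then containerd, container-status, kubelet, and dmesg logs. Condensed into bash using only the Run: commands shown above:

	# Every listing above returned found id: "" - the control plane never produced containers.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$name"
	done
	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400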
	W1217 02:03:31.155185 1483412 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:03:31.155235 1483412 out.go:285] * 
	W1217 02:03:31.155286 1483412 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.155303 1483412 out.go:285] * 
	W1217 02:03:31.157437 1483412 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:03:31.162495 1483412 out.go:203] 
	W1217 02:03:31.166505 1483412 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.166566 1483412 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:03:31.166589 1483412 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:03:31.169784 1483412 out.go:203] 

** /stderr **
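For manual triage, the kubelet failure quoted above can be reproduced from inside the node container. A minimal sketch, assuming the profile name from this run; the three probes are exactly the health check and the two troubleshooting commands named in the error text (trimming the journal with tail is an optional addition):

	# Probe the health endpoint that kubeadm's wait-control-plane phase polls.
	minikube ssh -p newest-cni-456492 -- curl -sSL http://127.0.0.1:10248/healthz
	# Check the kubelet unit and its recent journal, as the error message suggests.
	minikube ssh -p newest-cni-456492 -- sudo systemctl status kubelet
	minikube ssh -p newest-cni-456492 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50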
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
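A plausible retry, following minikube's own suggestion above to pass --extra-config=kubelet.cgroup-driver=systemd; the other flags mirror the failed invocation, and the delete step is a hypothetical addition to guarantee a clean first start:

	# Hypothetical clean retry; only the last flag is new relative to the failed run.
	out/minikube-linux-arm64 delete -p newest-cni-456492
	out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

The cgroups v1 deprecation warning in the same log points at a second lever: on a cgroup v1 host such as this 5.15 AWS kernel, kubelet v1.35 expects the KubeletConfiguration option FailCgroupV1 to be set to false, plus an explicit validation skip, per the KEP link in the warning.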
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-456492
helpers_test.go:244: (dbg) docker inspect newest-cni-456492:

-- stdout --
	[
	    {
	        "Id": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	        "Created": "2025-12-17T01:55:16.478266179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1483846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:55:16.541817284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hosts",
	        "LogPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2-json.log",
	        "Name": "/newest-cni-456492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-456492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-456492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	                "LowerDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456492",
	                "Source": "/var/lib/docker/volumes/newest-cni-456492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456492",
	                "name.minikube.sigs.k8s.io": "newest-cni-456492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac9e3ec6660ef534c80ae9a62e4f8293e36270572d36ebc788f7c4f17de733d6",
	            "SandboxKey": "/var/run/docker/netns/ac9e3ec6660e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-456492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:75:ea:0b:2f:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78c732410c8ee8b3c147900aac111eb07f35c057f64efcecb5d20570fed785bc",
	                    "EndpointID": "b72674d5fca307f7a4a283c14f474eea6fa6df5ca3b748d3cb3d1f3fc33098ac",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456492",
	                        "72c4fe7eb784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
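The inspect dump above is large; when only the container state and port bindings matter, docker's --format flag extracts them directly. A sketch against the same container; the second template is essentially the one minikube itself runs later in this log to find the SSH port, and it should print 34249, matching the bindings shown above:

	# Container state and init PID from the dump above.
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' newest-cni-456492
	# Host port bound to 22/tcp (the SSH mapping).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-456492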
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 6 (310.374602ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:03:31.587189 1496176 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
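The status error above says the profile no longer appears in the kubeconfig, and the warning says kubectl points at a stale context; minikube's own suggested remedy, scoped to this profile, would be:

	# Repoint the kubeconfig entry for this profile (per the warning above).
	out/minikube-linux-arm64 update-context -p newest-cni-456492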
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:03:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:03:06.446138 1494358 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:03:06.446331 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446344 1494358 out.go:374] Setting ErrFile to fd 2...
	I1217 02:03:06.446349 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446613 1494358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:03:06.446996 1494358 out.go:368] Setting JSON to false
	I1217 02:03:06.447949 1494358 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27937,"bootTime":1765909050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:03:06.448026 1494358 start.go:143] virtualization:  
	I1217 02:03:06.451183 1494358 out.go:179] * [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:03:06.455055 1494358 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:03:06.455196 1494358 notify.go:221] Checking for updates...
	I1217 02:03:06.461067 1494358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:03:06.464077 1494358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:06.467522 1494358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:03:06.470660 1494358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:03:06.473573 1494358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:03:06.476917 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:06.477577 1494358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:03:06.504584 1494358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:03:06.504713 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.568470 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.558714769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.568580 1494358 docker.go:319] overlay module found
	I1217 02:03:06.571663 1494358 out.go:179] * Using the docker driver based on existing profile
	I1217 02:03:06.574409 1494358 start.go:309] selected driver: docker
	I1217 02:03:06.574441 1494358 start.go:927] validating driver "docker" against &{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.574538 1494358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:03:06.575218 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.633705 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.62420129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.634037 1494358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:03:06.634074 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:06.634136 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:06.634181 1494358 start.go:353] cluster config:
	{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.637255 1494358 out.go:179] * Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	I1217 02:03:06.640178 1494358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:03:06.642991 1494358 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:03:06.645784 1494358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:03:06.645819 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:06.645947 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:06.646262 1494358 cache.go:107] acquiring lock: {Name:mk4890d4b47ae1973de2f5e1f0682feb41ee40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646336 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 02:03:06.646344 1494358 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.402µs
	I1217 02:03:06.646356 1494358 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 02:03:06.646368 1494358 cache.go:107] acquiring lock: {Name:mk966096fd85af29d80d70ba567f975fd1c8ab20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646398 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:03:06.646403 1494358 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.063µs
	I1217 02:03:06.646410 1494358 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646419 1494358 cache.go:107] acquiring lock: {Name:mkf4d095c495df29849f640a0755588b041f7643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646446 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:03:06.646451 1494358 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.19µs
	I1217 02:03:06.646458 1494358 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646468 1494358 cache.go:107] acquiring lock: {Name:mk1c22383e6094d20d836c3a904bbbe609668a02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646495 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:03:06.646500 1494358 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.599µs
	I1217 02:03:06.646506 1494358 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646514 1494358 cache.go:107] acquiring lock: {Name:mkc3683c3186a723f5651545e5f013a6bc8b78e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646539 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1217 02:03:06.646545 1494358 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.074µs
	I1217 02:03:06.646552 1494358 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646560 1494358 cache.go:107] acquiring lock: {Name:mk3a7027108fb6cda418f0aea932fdb404491198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646585 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 02:03:06.646589 1494358 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.105µs
	I1217 02:03:06.646596 1494358 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 02:03:06.646606 1494358 cache.go:107] acquiring lock: {Name:mkbcf0cf66af7f52acaeaf88186edd5961eb7fb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646635 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1217 02:03:06.646639 1494358 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 35.028µs
	I1217 02:03:06.646645 1494358 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1217 02:03:06.646653 1494358 cache.go:107] acquiring lock: {Name:mk85e5e85708e9527e64bdd95012aff390add343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646678 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 02:03:06.646682 1494358 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.031µs
	I1217 02:03:06.646688 1494358 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 02:03:06.646693 1494358 cache.go:87] Successfully saved all images to host disk.
	I1217 02:03:06.665484 1494358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:03:06.665506 1494358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:03:06.665526 1494358 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:03:06.665557 1494358 start.go:360] acquireMachinesLock for no-preload-178365: {Name:mkd4a1763d090ac24f95097d34ac035f597ec2f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.665618 1494358 start.go:364] duration metric: took 39.672µs to acquireMachinesLock for "no-preload-178365"
	I1217 02:03:06.665659 1494358 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:03:06.665665 1494358 fix.go:54] fixHost starting: 
	I1217 02:03:06.665948 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.681763 1494358 fix.go:112] recreateIfNeeded on no-preload-178365: state=Stopped err=<nil>
	W1217 02:03:06.681790 1494358 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:03:06.685089 1494358 out.go:252] * Restarting existing docker container for "no-preload-178365" ...
	I1217 02:03:06.685169 1494358 cli_runner.go:164] Run: docker start no-preload-178365
	I1217 02:03:06.958594 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.983526 1494358 kic.go:430] container "no-preload-178365" state is running.
	I1217 02:03:06.983925 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:07.006615 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:07.006877 1494358 machine.go:94] provisionDockerMachine start ...
	I1217 02:03:07.006940 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:07.027938 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:07.028270 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:07.028285 1494358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:03:07.028921 1494358 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42386->127.0.0.1:34254: read: connection reset by peer
	I1217 02:03:10.169609 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.169636 1494358 ubuntu.go:182] provisioning hostname "no-preload-178365"
	I1217 02:03:10.169740 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.194161 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.194504 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.194521 1494358 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-178365 && echo "no-preload-178365" | sudo tee /etc/hostname
	I1217 02:03:10.335145 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.335254 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.353261 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.353619 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.353703 1494358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178365/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:03:10.485869 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:03:10.485894 1494358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:03:10.485923 1494358 ubuntu.go:190] setting up certificates
	I1217 02:03:10.485939 1494358 provision.go:84] configureAuth start
	I1217 02:03:10.485997 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:10.502661 1494358 provision.go:143] copyHostCerts
	I1217 02:03:10.502746 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:03:10.502761 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:03:10.502842 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:03:10.502943 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:03:10.502955 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:03:10.502981 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:03:10.503037 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:03:10.503046 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:03:10.503070 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:03:10.503118 1494358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.no-preload-178365 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-178365]
	I1217 02:03:10.769670 1494358 provision.go:177] copyRemoteCerts
	I1217 02:03:10.769739 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:03:10.769777 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.789688 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:10.886311 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:03:10.907152 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:03:10.927302 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:03:10.945784 1494358 provision.go:87] duration metric: took 459.830227ms to configureAuth
	I1217 02:03:10.945813 1494358 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:03:10.946051 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:10.946065 1494358 machine.go:97] duration metric: took 3.939178962s to provisionDockerMachine
	I1217 02:03:10.946075 1494358 start.go:293] postStartSetup for "no-preload-178365" (driver="docker")
	I1217 02:03:10.946086 1494358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:03:10.946141 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:03:10.946189 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.963795 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.062181 1494358 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:03:11.066171 1494358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:03:11.066203 1494358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:03:11.066214 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:03:11.066271 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:03:11.066354 1494358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:03:11.066460 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:03:11.074455 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:11.096806 1494358 start.go:296] duration metric: took 150.715868ms for postStartSetup
	I1217 02:03:11.096935 1494358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:03:11.096985 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.115904 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.210914 1494358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:03:11.216447 1494358 fix.go:56] duration metric: took 4.550774061s for fixHost
	I1217 02:03:11.216474 1494358 start.go:83] releasing machines lock for "no-preload-178365", held for 4.550845758s
	I1217 02:03:11.216552 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:11.234013 1494358 ssh_runner.go:195] Run: cat /version.json
	I1217 02:03:11.234074 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.234105 1494358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:03:11.234160 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.254634 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.261745 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.349529 1494358 ssh_runner.go:195] Run: systemctl --version
	I1217 02:03:11.444567 1494358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:03:11.448907 1494358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:03:11.448999 1494358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:03:11.456651 1494358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:03:11.456676 1494358 start.go:496] detecting cgroup driver to use...
	I1217 02:03:11.456715 1494358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:03:11.456766 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:03:11.474180 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:03:11.487871 1494358 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:03:11.487945 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:03:11.503199 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:03:11.516179 1494358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:03:11.649581 1494358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:03:11.774192 1494358 docker.go:234] disabling docker service ...
	I1217 02:03:11.774263 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:03:11.789517 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:03:11.802804 1494358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:03:11.921518 1494358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:03:12.041333 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:03:12.054806 1494358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:03:12.068814 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:03:12.078910 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:03:12.088243 1494358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:03:12.088356 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:03:12.097152 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.106832 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:03:12.116858 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.126506 1494358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:03:12.134817 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:03:12.143713 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:03:12.152423 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:03:12.161395 1494358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:03:12.169023 1494358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:03:12.176758 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.290497 1494358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:03:12.413211 1494358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:03:12.413339 1494358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:03:12.419446 1494358 start.go:564] Will wait 60s for crictl version
	I1217 02:03:12.419560 1494358 ssh_runner.go:195] Run: which crictl
	I1217 02:03:12.423782 1494358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:03:12.453204 1494358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:03:12.453355 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.477890 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.502488 1494358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:03:12.505409 1494358 cli_runner.go:164] Run: docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:03:12.525803 1494358 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 02:03:12.529636 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
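
	The hosts-file update above builds the new contents in a temp file and then copies it over /etc/hosts instead of editing in place. Inside a Docker container /etc/hosts is typically a bind mount that cannot be replaced by rename, so cp is the safe way to rewrite it. A minimal standalone sketch of the same pattern, using the IP and hostname from the log:

	    # Assemble the replacement hosts file, then cp (not mv) it over
	    # /etc/hosts: the file is bind-mounted into the container and can
	    # only be rewritten in place, never swapped by rename.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.76.1\thost.minikube.internal\n'
	    } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts
	    rm -f "/tmp/hosts.$$"
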
	I1217 02:03:12.539141 1494358 kubeadm.go:884] updating cluster {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:03:12.539268 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:12.539323 1494358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:03:12.567893 1494358 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:03:12.567915 1494358 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:03:12.567927 1494358 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:03:12.568032 1494358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-178365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:03:12.568100 1494358 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:03:12.593237 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:12.593259 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:12.593281 1494358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:03:12.593303 1494358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178365 NodeName:no-preload-178365 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:03:12.593419 1494358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-178365"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:03:12.593487 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:03:12.601250 1494358 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:03:12.601320 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:03:12.608723 1494358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:03:12.621096 1494358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:03:12.634046 1494358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 02:03:12.646740 1494358 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:03:12.650274 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:03:12.660396 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.777431 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:12.794901 1494358 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365 for IP: 192.168.76.2
	I1217 02:03:12.794977 1494358 certs.go:195] generating shared ca certs ...
	I1217 02:03:12.795010 1494358 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:12.795186 1494358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:03:12.795275 1494358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:03:12.795305 1494358 certs.go:257] generating profile certs ...
	I1217 02:03:12.795455 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key
	I1217 02:03:12.795549 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2
	I1217 02:03:12.795620 1494358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key
	I1217 02:03:12.795764 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:03:12.795825 1494358 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:03:12.795852 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:03:12.795904 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:03:12.795962 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:03:12.796010 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:03:12.796087 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:12.796737 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:03:12.814980 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:03:12.832753 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:03:12.850216 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:03:12.868173 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:03:12.886289 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 02:03:12.903326 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:03:12.920371 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:03:12.940578 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:03:12.957601 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:03:12.974697 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:03:12.991288 1494358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:03:13.004811 1494358 ssh_runner.go:195] Run: openssl version
	I1217 02:03:13.011807 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.019338 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:03:13.027129 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030736 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030806 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.071860 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:03:13.079209 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.086171 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:03:13.093446 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.097994 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.098062 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.140311 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:03:13.148478 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.156400 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:03:13.164489 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168307 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168376 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.213768 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
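
	Each CA installed above is checked twice: once as a named symlink under /etc/ssl/certs and once under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), the <hash>.0 form that OpenSSL scans when it looks up trust anchors. A short sketch of creating such a hash link by hand:

	    # openssl x509 -hash prints the subject hash used for CA lookup;
	    # linking the PEM as <hash>.0 makes it discoverable in /etc/ssl/certs.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
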
	I1217 02:03:13.221877 1494358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:03:13.225450 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:03:13.267131 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:03:13.308825 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:03:13.351204 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:03:13.393248 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:03:13.434439 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
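
	The six openssl runs above all pass -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check would presumably trigger cert regeneration before the control plane is reused. The same probe can be scripted over several certs (paths taken from the log):

	    # `openssl x509 -checkend N` exits non-zero when the certificate
	    # expires within N seconds; 86400 s = 24 h.
	    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	      if ! sudo openssl x509 -noout -checkend 86400 \
	          -in "/var/lib/minikube/certs/${crt}.crt"; then
	        echo "${crt}.crt expires within 24h" >&2
	      fi
	    done
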
	I1217 02:03:13.475429 1494358 kubeadm.go:401] StartCluster: {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:13.475532 1494358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:03:13.475608 1494358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:03:13.504535 1494358 cri.go:89] found id: ""
	I1217 02:03:13.504615 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:03:13.512496 1494358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:03:13.512516 1494358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:03:13.512598 1494358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:03:13.520493 1494358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:03:13.520944 1494358 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.521050 1494358 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-178365" cluster setting kubeconfig missing "no-preload-178365" context setting]
	I1217 02:03:13.521320 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.522699 1494358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:03:13.530620 1494358 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 02:03:13.530655 1494358 kubeadm.go:602] duration metric: took 18.132356ms to restartPrimaryControlPlane
	I1217 02:03:13.530665 1494358 kubeadm.go:403] duration metric: took 55.248466ms to StartCluster
	I1217 02:03:13.530680 1494358 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.530739 1494358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.531369 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.531580 1494358 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:03:13.531879 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:13.531927 1494358 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:03:13.531992 1494358 addons.go:70] Setting storage-provisioner=true in profile "no-preload-178365"
	I1217 02:03:13.532007 1494358 addons.go:239] Setting addon storage-provisioner=true in "no-preload-178365"
	I1217 02:03:13.532031 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.532492 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.532869 1494358 addons.go:70] Setting dashboard=true in profile "no-preload-178365"
	I1217 02:03:13.532892 1494358 addons.go:239] Setting addon dashboard=true in "no-preload-178365"
	W1217 02:03:13.532899 1494358 addons.go:248] addon dashboard should already be in state true
	I1217 02:03:13.532921 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.533338 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.534314 1494358 addons.go:70] Setting default-storageclass=true in profile "no-preload-178365"
	I1217 02:03:13.534373 1494358 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-178365"
	I1217 02:03:13.534686 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.538786 1494358 out.go:179] * Verifying Kubernetes components...
	I1217 02:03:13.541864 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:13.565785 1494358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:03:13.568681 1494358 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.568703 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:03:13.568768 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.578846 1494358 addons.go:239] Setting addon default-storageclass=true in "no-preload-178365"
	I1217 02:03:13.578886 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.579340 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.579557 1494358 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:03:13.582535 1494358 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:03:13.585382 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:03:13.585433 1494358 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:03:13.585542 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.608796 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.639282 1494358 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.639307 1494358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:03:13.639371 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.653415 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.673307 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.775641 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.801572 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:13.824171 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:03:13.824193 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:03:13.841637 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:03:13.841671 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:03:13.855261 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:03:13.855283 1494358 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:03:13.874375 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:03:13.874398 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1217 02:03:13.875947 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:13.875994 1494358 retry.go:31] will retry after 288.181294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:13.892373 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.907211 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:03:13.907237 1494358 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:03:13.935844 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:03:13.935871 1494358 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:03:13.961470 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:03:13.961495 1494358 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:03:13.976000 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:03:13.976025 1494358 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:03:13.992266 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:13.992291 1494358 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:03:14.009756 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:14.164994 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:14.633552 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633588 1494358 retry.go:31] will retry after 357.626005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633797 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633814 1494358 retry.go:31] will retry after 154.442663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633867 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633885 1494358 retry.go:31] will retry after 536.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633975 1494358 node_ready.go:35] waiting up to 6m0s for node "no-preload-178365" to be "Ready" ...
	I1217 02:03:14.788822 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:14.850646 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.850682 1494358 retry.go:31] will retry after 194.97222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.992099 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:15.046507 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.089856 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.089896 1494358 retry.go:31] will retry after 200.825401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:15.123044 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.123092 1494358 retry.go:31] will retry after 471.273084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.171850 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:15.233255 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.233288 1494358 retry.go:31] will retry after 740.372196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.291633 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:15.354957 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.354993 1494358 retry.go:31] will retry after 685.879549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.595477 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.661175 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.661206 1494358 retry.go:31] will retry after 918.180528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.974527 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.041010 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:16.041109 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.041153 1494358 retry.go:31] will retry after 922.351729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:16.101618 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.101745 1494358 retry.go:31] will retry after 895.690357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.580236 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:16.635003 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:16.644295 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.644331 1494358 retry.go:31] will retry after 1.757458355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.963859 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.998199 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.029017 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.029053 1494358 retry.go:31] will retry after 1.200975191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:17.065693 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.065740 1494358 retry.go:31] will retry after 733.467842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.799468 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.857813 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.857844 1494358 retry.go:31] will retry after 1.598089082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.230995 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:18.288826 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.288856 1494358 retry.go:31] will retry after 1.072359143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.402269 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:18.499311 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.499346 1494358 retry.go:31] will retry after 1.974986181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:19.135143 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
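In parallel with the addon retries, node_ready.go is polling the same unreachable apiserver ("waiting up to 6m0s for node ... to be Ready", with the connection-refused warnings above). A minimal, stdlib-only sketch of that wait-until-deadline pattern follows; it probes the apiserver's /readyz health endpoint (typically reachable anonymously) rather than the node's Ready condition, and the address, deadline, and interval are illustrative assumptions.

// Stdlib-only sketch: poll the apiserver until ready or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's certificate is not in this host's trust store;
		// skipping verification is tolerable only for a local health probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver ready")
				return
			}
		}
		// "connection refused" lands here while the apiserver is still coming up.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}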
	I1217 02:03:19.361610 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:19.424580 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.424614 1494358 retry.go:31] will retry after 2.619930526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.456891 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:19.529540 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.529572 1494358 retry.go:31] will retry after 4.103816404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:20.475130 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:20.538062 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:20.538103 1494358 retry.go:31] will retry after 4.176264138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:21.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:22.045549 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:22.113264 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:22.113297 1494358 retry.go:31] will retry after 6.243728004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.634510 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:23.724320 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.724355 1494358 retry.go:31] will retry after 2.344494398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:24.135189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
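The node_ready.go warnings interleaved here come from a separate poll: minikube repeatedly fetches the node object from the apiserver and inspects its "Ready" condition. A rough client-go sketch of that check, assuming the kubeconfig path and node name shown in the log (an illustration of the check, not minikube's own node_ready.go):

```go
// nodeready.go: fetch a node and report its Ready condition. While the
// apiserver at 192.168.76.2:8443 refuses connections, the Get itself fails,
// which is what the warnings above keep retrying.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. connect: connection refused
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	fmt.Println(nodeReady("/var/lib/minikube/kubeconfig", "no-preload-178365"))
}
```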
	I1217 02:03:24.715564 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:24.778897 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:24.778930 1494358 retry.go:31] will retry after 6.21195427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.069135 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:26.129417 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.129453 1494358 retry.go:31] will retry after 7.88915894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:30.712264 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000165s
	I1217 02:03:30.712292 1483412 kubeadm.go:319] 
	I1217 02:03:30.712354 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:03:30.712387 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:03:30.712502 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:03:30.712534 1483412 kubeadm.go:319] 
	I1217 02:03:30.712837 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:03:30.712881 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:03:30.712921 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:03:30.712927 1483412 kubeadm.go:319] 
	I1217 02:03:30.716667 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:03:30.717232 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:03:30.717376 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:03:30.717666 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:03:30.717678 1483412 kubeadm.go:319] 
	I1217 02:03:30.717747 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:03:30.717804 1483412 kubeadm.go:403] duration metric: took 8m6.069034531s to StartCluster
	I1217 02:03:30.717842 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:03:30.717911 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:03:30.742283 1483412 cri.go:89] found id: ""
	I1217 02:03:30.742310 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.742319 1483412 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:03:30.742326 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:03:30.742390 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:03:30.768192 1483412 cri.go:89] found id: ""
	I1217 02:03:30.768214 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.768223 1483412 logs.go:284] No container was found matching "etcd"
	I1217 02:03:30.768229 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:03:30.768289 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:03:30.792035 1483412 cri.go:89] found id: ""
	I1217 02:03:30.792057 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.792065 1483412 logs.go:284] No container was found matching "coredns"
	I1217 02:03:30.792071 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:03:30.792131 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:03:30.816804 1483412 cri.go:89] found id: ""
	I1217 02:03:30.816825 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.816833 1483412 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:03:30.816840 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:03:30.816896 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:03:30.841903 1483412 cri.go:89] found id: ""
	I1217 02:03:30.841925 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.841934 1483412 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:03:30.841940 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:03:30.841996 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:03:30.865941 1483412 cri.go:89] found id: ""
	I1217 02:03:30.866019 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.866042 1483412 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:03:30.866062 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:03:30.866154 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:03:30.890127 1483412 cri.go:89] found id: ""
	I1217 02:03:30.890151 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.890160 1483412 logs.go:284] No container was found matching "kindnet"
	I1217 02:03:30.890169 1483412 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:03:30.890180 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:03:30.956000 1483412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:03:30.956023 1483412 logs.go:123] Gathering logs for containerd ...
	I1217 02:03:30.956037 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:03:30.994676 1483412 logs.go:123] Gathering logs for container status ...
	I1217 02:03:30.994742 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:03:31.049766 1483412 logs.go:123] Gathering logs for kubelet ...
	I1217 02:03:31.049839 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:03:31.115613 1483412 logs.go:123] Gathering logs for dmesg ...
	I1217 02:03:31.115651 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:03:31.155185 1483412 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:03:31.155235 1483412 out.go:285] * 
	W1217 02:03:31.155286 1483412 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.155303 1483412 out.go:285] * 
	W1217 02:03:31.157437 1483412 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:03:31.162495 1483412 out.go:203] 
	W1217 02:03:31.166505 1483412 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.166566 1483412 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:03:31.166589 1483412 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:03:31.169784 1483412 out.go:203] 
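The root failure in this newest-cni start is the kubelet health check: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and never finds a listener, so no static-pod control plane is ever created, which is why the crictl scans earlier found zero containers for every component. A small Go equivalent of the probe kubeadm describes ("curl -sSL http://127.0.0.1:10248/healthz"); this is an illustration of the check, not kubeadm's wait-control-plane code:

```go
// kubeletprobe.go: poll the kubelet health endpoint until it answers or a
// 4m0s deadline (the same window kubeadm reports) expires.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
			return
		}
		// "connection refused" means nothing is listening at all: the
		// kubelet process is not running, matching the failure in this log.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet did not become healthy before the deadline")
}
```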
	W1217 02:03:26.635049 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:28.357601 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:28.414285 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:28.414314 1494358 retry.go:31] will retry after 8.141385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:29.135171 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:30.991983 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:31.086857 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:31.086888 1494358 retry.go:31] will retry after 8.346677944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682286175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682307123Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682371829Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682402894Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682419543Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682442197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682466124Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682483519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682502038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682554362Z" level=info msg="Connect containerd service"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682995064Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.683809119Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.695949932Z" level=info msg="Start subscribing containerd event"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696048682Z" level=info msg="Start recovering state"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696295371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696416242Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.735993632Z" level=info msg="Start event monitor"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736047688Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736057895Z" level=info msg="Start streaming server"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736067077Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736078375Z" level=info msg="runtime interface starting up..."
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736095212Z" level=info msg="starting plugins..."
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736110121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 01:55:22 newest-cni-456492 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.737713198Z" level=info msg="containerd successfully booted in 0.087876s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:32.238264    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:32.238667    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:32.240230    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:32.240540    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:32.242201    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
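	Note: every kubectl call in this section fails the same way; nothing is listening on localhost:8443 because the apiserver never came up. A minimal sketch that reproduces the symptom without kubectl (the address and timeout here are assumptions, not part of the test suite):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Dial the apiserver port that kubectl keeps failing to reach above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}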
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:03:32 up  7:46,  0 user,  load average: 1.35, 1.11, 1.64
	Linux newest-cni-456492 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:03:28 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:29 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 02:03:29 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:29 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:29 newest-cni-456492 kubelet[4786]: E1217 02:03:29.679504    4786 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:29 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:29 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:30 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 02:03:30 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:30 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:30 newest-cni-456492 kubelet[4792]: E1217 02:03:30.427992    4792 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:30 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:30 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:31 newest-cni-456492 kubelet[4875]: E1217 02:03:31.240932    4875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:31 newest-cni-456492 kubelet[4904]: E1217 02:03:31.952164    4904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:31 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
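	Note: the restart loop above (counter 318 through 321) has a single cause: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host, and kubelet's own validation error shows this Ubuntu 20.04 host (kernel 5.15.0-1084-aws) is still on cgroup v1. A minimal sketch for checking a host's cgroup mode, assuming golang.org/x/sys/unix is available:
	
	package main
	
	import (
		"fmt"
	
		"golang.org/x/sys/unix"
	)
	
	func main() {
		// On a unified (cgroup v2) host /sys/fs/cgroup is a cgroup2fs mount;
		// on a legacy (v1) host it is a tmpfs of per-controller hierarchies.
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			panic(err)
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2: kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35+ fails validation, as in the loop above")
		}
	}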
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 6 (375.382851ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:03:32.760746 1496399 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-456492" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.23s)

TestStartStop/group/no-preload/serial/DeployApp (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-178365 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-178365 create -f testdata/busybox.yaml: exit status 1 (69.771262ms)

** stderr ** 
	error: context "no-preload-178365" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-178365 create -f testdata/busybox.yaml failed: exit status 1
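Note: the deploy fails before anything reaches the cluster; the profile's context was never written to the kubeconfig (the status check below confirms "no-preload-178365" does not appear in it). A minimal sketch that lists the contexts kubectl would actually see, assuming k8s.io/client-go is on the module path:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Resolve the kubeconfig the same way kubectl does ($KUBECONFIG,
		// otherwise ~/.kube/config) and print the contexts it contains.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name) // "no-preload-178365" is missing here
		}
	}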
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1475961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:53:10.944588207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbc378cb18c4db6321bba9064bec37ae2907203c00dcd497af9edc9b3f71361f",
	            "SandboxKey": "/var/run/docker/netns/dbc378cb18c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:a8:78:cd:87:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "46c074d2d98270a72981dceacb4c45383893c762846fd2a67a1498e3670844fd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
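Note: the inspect output above shows the kic container itself is running, with the apiserver port 8443/tcp published on 127.0.0.1:34242, so the failure here is purely the missing kubeconfig entry. A minimal sketch (container name copied from the output above; not a test-suite helper) that recovers the mapping via `docker port`:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// `docker port <container> 8443/tcp` prints the published host address
		// for the apiserver port, e.g. 127.0.0.1:34242 in the inspect above.
		out, err := exec.Command("docker", "port", "no-preload-178365", "8443/tcp").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Print(string(out))
	}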
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 6 (297.006725ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:01:39.040685 1491835 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-069646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:51 UTC │
	│ start   │ -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:52 UTC │
	│ image   │ old-k8s-version-859530 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ pause   │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ unpause │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:55:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:55:11.587586 1483412 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:11.587793 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.587821 1483412 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:11.587840 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.588238 1483412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:55:11.589101 1483412 out.go:368] Setting JSON to false
	I1217 01:55:11.589983 1483412 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27462,"bootTime":1765909050,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:55:11.590050 1483412 start.go:143] virtualization:  
	I1217 01:55:11.594008 1483412 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:55:11.598404 1483412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:55:11.598486 1483412 notify.go:221] Checking for updates...
	I1217 01:55:11.605445 1483412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:55:11.608601 1483412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:55:11.611778 1483412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:55:11.614850 1483412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:55:11.617933 1483412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:55:11.621419 1483412 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:11.621527 1483412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:55:11.640802 1483412 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:55:11.640922 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.701423 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.691901377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.701533 1483412 docker.go:319] overlay module found
	I1217 01:55:11.704806 1483412 out.go:179] * Using the docker driver based on user configuration
	I1217 01:55:11.707752 1483412 start.go:309] selected driver: docker
	I1217 01:55:11.707769 1483412 start.go:927] validating driver "docker" against <nil>
	I1217 01:55:11.707784 1483412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:55:11.708522 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.771255 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.762421806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.771409 1483412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:55:11.771445 1483412 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:55:11.771663 1483412 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:55:11.774669 1483412 out.go:179] * Using Docker driver with root privileges
	I1217 01:55:11.777523 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:11.777592 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:11.777607 1483412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:55:11.777735 1483412 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:11.780890 1483412 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 01:55:11.783718 1483412 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:55:11.786584 1483412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:55:11.789380 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:11.789429 1483412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:55:11.789441 1483412 cache.go:65] Caching tarball of preloaded images
	I1217 01:55:11.789467 1483412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:55:11.789532 1483412 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:55:11.789541 1483412 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:55:11.789677 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:11.789696 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json: {Name:mk81bb26d654057444403d949cc7b962f958f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:11.808673 1483412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:55:11.808698 1483412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:55:11.808713 1483412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:55:11.808743 1483412 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:55:11.808846 1483412 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "newest-cni-456492"
	I1217 01:55:11.808876 1483412 start.go:93] Provisioning new machine with config: &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:55:11.808947 1483412 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:55:11.812418 1483412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:55:11.812643 1483412 start.go:159] libmachine.API.Create for "newest-cni-456492" (driver="docker")
	I1217 01:55:11.812678 1483412 client.go:173] LocalClient.Create starting
	I1217 01:55:11.812766 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:55:11.812806 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812824 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.812874 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:55:11.812896 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812911 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.813288 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:55:11.828937 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:55:11.829030 1483412 network_create.go:284] running [docker network inspect newest-cni-456492] to gather additional debugging logs...
	I1217 01:55:11.829050 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492
	W1217 01:55:11.845086 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 returned with exit code 1
	I1217 01:55:11.845116 1483412 network_create.go:287] error running [docker network inspect newest-cni-456492]: docker network inspect newest-cni-456492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-456492 not found
	I1217 01:55:11.845144 1483412 network_create.go:289] output of [docker network inspect newest-cni-456492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-456492 not found
	
	** /stderr **
	I1217 01:55:11.845236 1483412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:11.862130 1483412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:55:11.862454 1483412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:55:11.862764 1483412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:55:11.862966 1483412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 01:55:11.863436 1483412 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bb4b0}
	I1217 01:55:11.863452 1483412 network_create.go:124] attempt to create docker network newest-cni-456492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:55:11.863519 1483412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-456492 newest-cni-456492
	I1217 01:55:11.939566 1483412 network_create.go:108] docker network newest-cni-456492 192.168.85.0/24 created
	I1217 01:55:11.939593 1483412 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-456492" container
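The subnet scan above is easy to reproduce outside minikube: list the existing docker networks, collect their IPAM subnets, and take the first 192.168.x.0/24 candidate (49, 58, 67, 76, 85, ...) that is not already taken. A minimal Go sketch, assuming only that the docker CLI is on PATH; pickFreeSubnet is a hypothetical helper, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// pickFreeSubnet mirrors the scan in the log: it collects the subnets of
// existing docker networks and returns the first 192.168.x.0/24 candidate
// (stepping 49 -> 58 -> 67 -> ... as the log shows) that is not taken.
func pickFreeSubnet() (string, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			continue
		}
		taken[strings.TrimSpace(string(out))] = true
	}
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	cidr, err := pickFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("free subnet:", cidr)
}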
	I1217 01:55:11.939681 1483412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:55:11.956827 1483412 cli_runner.go:164] Run: docker volume create newest-cni-456492 --label name.minikube.sigs.k8s.io=newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:55:11.974528 1483412 oci.go:103] Successfully created a docker volume newest-cni-456492
	I1217 01:55:11.974628 1483412 cli_runner.go:164] Run: docker run --rm --name newest-cni-456492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --entrypoint /usr/bin/test -v newest-cni-456492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:55:12.497008 1483412 oci.go:107] Successfully prepared a docker volume newest-cni-456492
	I1217 01:55:12.497078 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:12.497091 1483412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:55:12.497172 1483412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:55:16.389962 1483412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.892749984s)
	I1217 01:55:16.389996 1483412 kic.go:203] duration metric: took 3.892902757s to extract preloaded images to volume ...
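The preload extraction above is plain docker plumbing: bind-mount the .tar.lz4 read-only, mount the named volume at /extractDir, and let tar -I lz4 unpack into it. A rough Go equivalent, assuming docker is available; the tarball path is illustrative, the image and volume names come from the log:

package main

import "os/exec"

func main() {
	tarball := "/path/to/preloaded-images.tar.lz4" // illustrative path
	volume := "newest-cni-456492"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

	// Same shape as the logged command: tar reads the lz4 archive and
	// extracts it into the named volume mounted at /extractDir.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(err.Error() + "\n" + string(out))
	}
}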
	W1217 01:55:16.390136 1483412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:55:16.390261 1483412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:55:16.462546 1483412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-456492 --name newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-456492 --network newest-cni-456492 --ip 192.168.85.2 --volume newest-cni-456492:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:55:16.772361 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Running}}
	I1217 01:55:16.793387 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:16.820136 1483412 cli_runner.go:164] Run: docker exec newest-cni-456492 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:55:16.881491 1483412 oci.go:144] the created container "newest-cni-456492" has a running status.
	I1217 01:55:16.881521 1483412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa...
	I1217 01:55:17.289070 1483412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:55:17.323822 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.352076 1483412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:55:17.352103 1483412 kic_runner.go:114] Args: [docker exec --privileged newest-cni-456492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:55:17.412601 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
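The kic SSH bootstrap in the lines above boils down to three steps: generate a keypair on the host, copy the public half to /home/docker/.ssh/authorized_keys inside the container, and chown it to the docker user. A sketch of the same flow, assuming ssh-keygen and the docker CLI are on PATH (local file names are illustrative):

package main

import (
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const container = "newest-cni-456492" // container name from the log

	// 1. Generate a passphrase-less keypair on the host, as kic does.
	run("ssh-keygen", "-t", "rsa", "-N", "", "-f", "id_rsa")

	// 2. Copy the public key into the container's authorized_keys.
	pub, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("authorized_keys", pub, 0600); err != nil {
		panic(err)
	}
	run("docker", "cp", "authorized_keys",
		container+":/home/docker/.ssh/authorized_keys")

	// 3. Fix ownership, mirroring the logged `chown docker:docker`.
	run("docker", "exec", "--privileged", container,
		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys")
}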
	I1217 01:55:17.440021 1483412 machine.go:94] provisionDockerMachine start ...
	I1217 01:55:17.440112 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:17.465337 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:17.465706 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:17.465717 1483412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:55:17.466482 1483412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45132->127.0.0.1:34249: read: connection reset by peer
	I1217 01:55:20.597038 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.597109 1483412 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 01:55:20.597212 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.614509 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.614828 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.614859 1483412 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 01:55:20.756257 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.756341 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.774598 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.774975 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.774999 1483412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:55:20.905912 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:55:20.905939 1483412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:55:20.905956 1483412 ubuntu.go:190] setting up certificates
	I1217 01:55:20.905965 1483412 provision.go:84] configureAuth start
	I1217 01:55:20.906024 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:20.923247 1483412 provision.go:143] copyHostCerts
	I1217 01:55:20.923326 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:55:20.923339 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:55:20.923416 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:55:20.923533 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:55:20.923544 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:55:20.923576 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:55:20.923649 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:55:20.923659 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:55:20.923689 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:55:20.923744 1483412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
	I1217 01:55:21.003325 1483412 provision.go:177] copyRemoteCerts
	I1217 01:55:21.003406 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:55:21.003466 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.021337 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.118292 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:55:21.145239 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:55:21.164973 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:55:21.184653 1483412 provision.go:87] duration metric: took 278.664546ms to configureAuth
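configureAuth above generates a server certificate whose SANs cover every name a client might dial: 127.0.0.1, the container's static IP, and the host/profile names. A compact sketch of such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube actually signs with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN list taken from the log: san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")}
	dns := []string{"localhost", "minikube", "newest-cni-456492"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed here (template is its own parent); minikube uses its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}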
	I1217 01:55:21.184681 1483412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:55:21.184876 1483412 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:21.184890 1483412 machine.go:97] duration metric: took 3.744849982s to provisionDockerMachine
	I1217 01:55:21.184897 1483412 client.go:176] duration metric: took 9.372209957s to LocalClient.Create
	I1217 01:55:21.184913 1483412 start.go:167] duration metric: took 9.372271349s to libmachine.API.Create "newest-cni-456492"
	I1217 01:55:21.184924 1483412 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 01:55:21.184935 1483412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:55:21.184993 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:55:21.185038 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.202893 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.301704 1483412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:55:21.305094 1483412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:55:21.305120 1483412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:55:21.305132 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:55:21.305183 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:55:21.305257 1483412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:55:21.305367 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:55:21.313575 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:21.332690 1483412 start.go:296] duration metric: took 147.751178ms for postStartSetup
	I1217 01:55:21.333071 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.349950 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:21.350233 1483412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:55:21.350284 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.367086 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.458630 1483412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:55:21.463314 1483412 start.go:128] duration metric: took 9.65435334s to createHost
	I1217 01:55:21.463343 1483412 start.go:83] releasing machines lock for "newest-cni-456492", held for 9.654483449s
	I1217 01:55:21.463413 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.480150 1483412 ssh_runner.go:195] Run: cat /version.json
	I1217 01:55:21.480207 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.480490 1483412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:55:21.480549 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.503493 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.506377 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.593349 1483412 ssh_runner.go:195] Run: systemctl --version
	I1217 01:55:21.687982 1483412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:55:21.692115 1483412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:55:21.692182 1483412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:55:21.718403 1483412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
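The find/-exec mv step above neutralizes pre-existing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI chosen later (kindnet) is the only active one. Equivalent logic as a Go sketch, assuming the same /etc/cni/net.d layout:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as the logged find: bridge or podman configs only.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}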
	I1217 01:55:21.718427 1483412 start.go:496] detecting cgroup driver to use...
	I1217 01:55:21.718460 1483412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:55:21.718523 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:55:21.733259 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:55:21.746485 1483412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:55:21.746571 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:55:21.764553 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:55:21.782958 1483412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:55:21.908620 1483412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:55:22.026459 1483412 docker.go:234] disabling docker service ...
	I1217 01:55:22.026538 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:55:22.052603 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:55:22.068218 1483412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:55:22.193394 1483412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:55:22.321475 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:55:22.334922 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:55:22.349881 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:55:22.359035 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:55:22.368328 1483412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:55:22.368453 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:55:22.377717 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.387475 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:55:22.396690 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.405767 1483412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:55:22.414387 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:55:22.423447 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:55:22.432777 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:55:22.442244 1483412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:55:22.450102 1483412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:55:22.457779 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:22.584574 1483412 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:55:22.739170 1483412 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:55:22.739315 1483412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
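The "Will wait 60s for socket path" step above is a simple stat-poll with a deadline. The same idea as a small Go sketch (path and timeout taken from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the unix socket appears or the deadline passes,
// mirroring minikube's wait for /run/containerd/containerd.sock.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}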
	I1217 01:55:22.743653 1483412 start.go:564] Will wait 60s for crictl version
	I1217 01:55:22.743721 1483412 ssh_runner.go:195] Run: which crictl
	I1217 01:55:22.747627 1483412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:55:22.774963 1483412 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:55:22.775088 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.795646 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.822177 1483412 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:55:22.825213 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:22.841339 1483412 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 01:55:22.845097 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:22.857844 1483412 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:55:22.860750 1483412 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1217 01:55:22.860891 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:22.860986 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.887811 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.887838 1483412 containerd.go:534] Images already preloaded, skipping extraction
	I1217 01:55:22.887921 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.916774 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.916798 1483412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:55:22.916806 1483412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:55:22.916901 1483412 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:55:22.916973 1483412 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:55:22.941450 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:22.941474 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:22.941497 1483412 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:55:22.941521 1483412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:55:22.941668 1483412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
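The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, in that order. A sketch that enumerates the documents, assuming the third-party gopkg.in/yaml.v3 module and a local copy of the file named kubeadm.yaml:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumption: fetched via `go get gopkg.in/yaml.v3`
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder walks a multi-document stream one Decode at a time.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the stream
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}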
	I1217 01:55:22.941741 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:55:22.949446 1483412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:55:22.949536 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:55:22.957307 1483412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:55:22.970080 1483412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:55:22.983144 1483412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 01:55:22.996455 1483412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:55:23.000264 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:23.011956 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:23.132195 1483412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:55:23.153898 1483412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 01:55:23.153924 1483412 certs.go:195] generating shared ca certs ...
	I1217 01:55:23.153953 1483412 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.154120 1483412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:55:23.154167 1483412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:55:23.154179 1483412 certs.go:257] generating profile certs ...
	I1217 01:55:23.154252 1483412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 01:55:23.154267 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt with IP's: []
	I1217 01:55:23.536556 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt ...
	I1217 01:55:23.536598 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt: {Name:mk5f328f97a5398eaf8448e799e55e14628a21cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536799 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key ...
	I1217 01:55:23.536813 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key: {Name:mk204e71ac4a7537095f4378fcacae497aae9e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536900 1483412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 01:55:23.536919 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 01:55:23.700587 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d ...
	I1217 01:55:23.700617 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d: {Name:mk2ff6ffd7e0f9e8790c41f75004f783e2e2cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700810 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d ...
	I1217 01:55:23.700838 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d: {Name:mk4a8fd878c1db6fa4ca6d31ac312311a9e574fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700939 1483412 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt
	I1217 01:55:23.701025 1483412 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key
	I1217 01:55:23.701086 1483412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 01:55:23.701104 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt with IP's: []
	I1217 01:55:24.186185 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt ...
	I1217 01:55:24.186218 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt: {Name:mk4e097689774236e217287c4769a9bc6b62d157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186434 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key ...
	I1217 01:55:24.186460 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key: {Name:mk9311419a1f9f3ab4e171bbfc5a685160d56892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
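Note the 10.96.0.1 in the apiserver cert SANs above: it is the first usable address of the ServiceCIDR 10.96.0.0/12, i.e. the ClusterIP the in-cluster kubernetes Service receives, so the apiserver certificate must cover it. A one-line check with Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

// firstServiceIP returns the .1 address of a service CIDR; 10.96.0.0/12
// yields 10.96.0.1, which is why it appears in the SAN list above.
func firstServiceIP(cidr string) netip.Addr {
	p := netip.MustParsePrefix(cidr)
	return p.Addr().Next()
}

func main() {
	fmt.Println(firstServiceIP("10.96.0.0/12")) // 10.96.0.1
}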
	I1217 01:55:24.186687 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:55:24.186737 1483412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:55:24.186753 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:55:24.186781 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:55:24.186819 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:55:24.186847 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:55:24.186901 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:24.187489 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:55:24.207140 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:55:24.225813 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:55:24.244898 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:55:24.264402 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:55:24.283038 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:55:24.302197 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:55:24.320347 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:55:24.339022 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:55:24.357411 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:55:24.375801 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:55:24.394312 1483412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:55:24.407959 1483412 ssh_runner.go:195] Run: openssl version
	I1217 01:55:24.414593 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.422149 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:55:24.429938 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433843 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433913 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.475535 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.483235 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.490706 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.498434 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:55:24.506686 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510403 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510492 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.551573 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:55:24.559261 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:55:24.566821 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.574182 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:55:24.581528 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585424 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585508 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.628267 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:55:24.636095 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
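The test -L / ln -fs sequence above implements OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is reachable via a <subject_hash>.0 symlink (e.g. b5213941.0 -> minikubeCA.pem). A sketch that shells out to openssl the same way the log does; linkBySubjectHash is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink OpenSSL's certificate
// directory lookup expects, emulating `ln -fs` by removing any old link.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // ignore error: link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
}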
	I1217 01:55:24.643970 1483412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:55:24.648671 1483412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:55:24.648775 1483412 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:24.648946 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:55:24.649043 1483412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:55:24.677969 1483412 cri.go:89] found id: ""
	I1217 01:55:24.678093 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:55:24.688459 1483412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:55:24.696458 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:55:24.696550 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:55:24.704828 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:55:24.704848 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:55:24.704931 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:55:24.712883 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:55:24.712983 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:55:24.720826 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:55:24.728999 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:55:24.729100 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:55:24.736825 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.744799 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:55:24.744867 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.752477 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:55:24.760816 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:55:24.760931 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:55:24.768678 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:55:24.810821 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:55:24.811126 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:55:24.896174 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:55:24.896294 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:55:24.896359 1483412 kubeadm.go:319] OS: Linux
	I1217 01:55:24.896426 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:55:24.896502 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:55:24.896566 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:55:24.896639 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:55:24.896704 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:55:24.896779 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:55:24.896863 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:55:24.896941 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:55:24.897010 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:55:24.971043 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:55:24.971234 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:55:24.971378 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:55:24.982063 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:55:24.988218 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:55:24.988318 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:55:24.988395 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:55:25.419455 1483412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:55:25.522339 1483412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:55:25.598229 1483412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:55:25.671518 1483412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:55:25.854804 1483412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:55:25.855019 1483412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.196066 1483412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:55:26.196425 1483412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.785707 1483412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:55:26.841556 1483412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:55:27.019008 1483412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:55:27.019328 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:55:27.196727 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:55:27.751450 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:55:27.908167 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:55:28.296645 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:55:28.549325 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:55:28.550095 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:55:28.554755 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:55:28.558438 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:55:28.558547 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:55:28.558629 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:55:28.558695 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:55:28.574196 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:55:28.574560 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:55:28.582119 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:55:28.582467 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:55:28.582759 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:55:28.732745 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:55:28.732882 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.124748 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:57:34.124781 1475658 kubeadm.go:319] 
	I1217 01:57:34.124851 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
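The failure above is kubeadm's wait-control-plane phase giving up on the kubelet health endpoint: it polls http://127.0.0.1:10248/healthz until it answers 200 or the 4m0s budget runs out, and here the deadline wins. A Go sketch of an equivalent poll (URL and timeout taken from the log):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// kubeadm allows up to 4m0s for the kubelet healthz to come up.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
			"http://127.0.0.1:10248/healthz", nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err != nil && ctx.Err() != nil {
			// This is the "context deadline exceeded" the log reports.
			panic(ctx.Err())
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}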
	I1217 01:57:34.130032 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.130094 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.130184 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.130239 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.130274 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.130319 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.130369 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.130417 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.130466 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.130513 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.130562 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.130607 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.130655 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.130701 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.130774 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.130869 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.130959 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.131021 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.134054 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.134142 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.134206 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.134273 1475658 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:57:34.134329 1475658 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:57:34.134389 1475658 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:57:34.134439 1475658 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:57:34.134492 1475658 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:57:34.134614 1475658 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134712 1475658 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:57:34.134885 1475658 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134988 1475658 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:57:34.135097 1475658 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:57:34.135183 1475658 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:57:34.135283 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:34.135344 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:34.135402 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:34.135459 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:34.135521 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:34.135575 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:34.135655 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:34.135721 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:34.138598 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:34.138713 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:34.138799 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:34.138871 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:34.138982 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:34.139083 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:34.139203 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:34.139301 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:34.139344 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:34.139483 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:34.139594 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.139663 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005993508s
	I1217 01:57:34.139667 1475658 kubeadm.go:319] 
	I1217 01:57:34.139728 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:57:34.139770 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:57:34.139882 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:57:34.139887 1475658 kubeadm.go:319] 
	I1217 01:57:34.139998 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:57:34.140032 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:57:34.140065 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 01:57:34.140174 1475658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005993508s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
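The stderr warnings above point at the host's cgroup setup (cgroups v1 on kernel 5.15.0-1084-aws). One quick, illustrative way to confirm which cgroup hierarchy the node is actually running:

	# cgroup2fs means a unified (v2) hierarchy; tmpfs means legacy v1,
	# matching the deprecation warning in the stderr above.
	stat -fc %T /sys/fs/cgroup/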
	I1217 01:57:34.140253 1475658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:57:34.140626 1475658 kubeadm.go:319] 
	I1217 01:57:34.576208 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:57:34.589972 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:34.590043 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:34.598643 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:34.598705 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:34.598780 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:34.606738 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:34.606852 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:34.614781 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:34.622706 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:34.622772 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:34.630400 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.638446 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:34.638512 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.646373 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:34.654277 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:34.654364 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
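The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is deleted unless it already points at https://control-plane.minikube.internal:8443. A condensed sketch of the same loop (illustrative only, not the harness's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done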
	I1217 01:57:34.662056 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:34.702011 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.702113 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.773814 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.773913 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.773969 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.774045 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.774109 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.774187 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.774266 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.774339 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.774416 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.774474 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.774547 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.774609 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.846561 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.846676 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.846767 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.854122 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.857357 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.857482 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.857567 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.857679 1475658 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:57:34.857759 1475658 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:57:34.857854 1475658 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:57:34.857924 1475658 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:57:34.858004 1475658 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:57:34.858087 1475658 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:57:34.858187 1475658 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:57:34.858274 1475658 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:57:34.858318 1475658 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:57:34.858386 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:35.122967 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:35.269702 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:35.473145 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:36.090186 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:36.438081 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:36.439114 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:36.441843 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:36.444972 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:36.445093 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:36.445187 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:36.447586 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:36.469683 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:36.469812 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:36.477712 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:36.478146 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:36.478375 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:36.619400 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:36.619522 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:59:28.732281 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001211696s
	I1217 01:59:28.732307 1483412 kubeadm.go:319] 
	I1217 01:59:28.732365 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:59:28.732399 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:59:28.732504 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:59:28.732508 1483412 kubeadm.go:319] 
	I1217 01:59:28.732613 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:59:28.732645 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:59:28.732676 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:59:28.732680 1483412 kubeadm.go:319] 
	I1217 01:59:28.737697 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:59:28.738161 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:59:28.738281 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:59:28.738538 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:59:28.738549 1483412 kubeadm.go:319] 
	I1217 01:59:28.738623 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
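The cgroups v1 warning above names the kubelet configuration option FailCgroupV1. Assuming the camelCase field failCgroupV1 in the kubelet's config file (path taken from the kubelet-start lines above), a rough sketch of the workaround it describes, run inside the node:

	# Sketch only: appends the setting to the config file kubeadm wrote;
	# a real fix would merge it into the YAML rather than append blindly.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml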
	W1217 01:59:28.738846 1483412 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001211696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
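At this point the two troubleshooting commands kubeadm suggests are the quickest next step; run against the node from the host they would look roughly like:

	# Inspect kubelet service state and its recent journal entries.
	minikube ssh -p newest-cni-456492 -- sudo systemctl status kubelet --no-pager
	minikube ssh -p newest-cni-456492 -- sudo journalctl -xeu kubelet --no-pager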
	I1217 01:59:28.738945 1483412 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:59:29.148897 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:59:29.163236 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:59:29.163322 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:59:29.173290 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:59:29.173315 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:59:29.173378 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:59:29.189171 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:59:29.189238 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:59:29.198769 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:59:29.206895 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:59:29.206960 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:59:29.214464 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.222503 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:59:29.222596 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.230032 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:59:29.237621 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:59:29.237713 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:59:29.244936 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:59:29.283887 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:59:29.284148 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:59:29.355640 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:59:29.355800 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:59:29.355878 1483412 kubeadm.go:319] OS: Linux
	I1217 01:59:29.355962 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:59:29.356047 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:59:29.356127 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:59:29.356205 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:59:29.356285 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:59:29.356371 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:59:29.356449 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:59:29.356530 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:59:29.356609 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:59:29.424082 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:59:29.424247 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:59:29.424404 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:59:29.430675 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:59:29.436331 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:59:29.436427 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:59:29.436498 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:59:29.436614 1483412 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:59:29.436760 1483412 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:59:29.436868 1483412 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:59:29.436955 1483412 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:59:29.437066 1483412 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:59:29.437169 1483412 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:59:29.437294 1483412 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:59:29.437455 1483412 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:59:29.437914 1483412 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:59:29.438023 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:59:29.643674 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:59:29.811188 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:59:30.039930 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:59:30.429283 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:59:30.523266 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:59:30.523965 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:59:30.526610 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:59:30.529865 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:59:30.529993 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:59:30.530148 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:59:30.530270 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:59:30.551379 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:59:30.551496 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:59:30.562968 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:59:30.563492 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:59:30.563746 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:59:30.712531 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:59:30.712658 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:36.620744 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001388969s
	I1217 02:01:36.620785 1475658 kubeadm.go:319] 
	I1217 02:01:36.620840 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:36.620873 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:36.620977 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:36.620988 1475658 kubeadm.go:319] 
	I1217 02:01:36.621087 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:36.621122 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:36.621154 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:36.621162 1475658 kubeadm.go:319] 
	I1217 02:01:36.624858 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:01:36.625354 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:36.625468 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:36.625731 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:36.625742 1475658 kubeadm.go:319] 
	I1217 02:01:36.625808 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
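Note the failure mode changed between attempts: the first init timed out against the healthz endpoint (context deadline exceeded), while this retry gets connection refused, meaning nothing is listening on port 10248 at all. A quick, illustrative check for a listener on that port (profile name from the cert lines above):

	# No matching line here means the kubelet never bound its healthz port.
	minikube ssh -p no-preload-178365 -- sudo ss -ltnp | grep 10248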
	I1217 02:01:36.625889 1475658 kubeadm.go:403] duration metric: took 8m7.357719708s to StartCluster
	I1217 02:01:36.625944 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:36.626024 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:36.652571 1475658 cri.go:89] found id: ""
	I1217 02:01:36.652609 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.652624 1475658 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:36.652631 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:01:36.652704 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:36.678690 1475658 cri.go:89] found id: ""
	I1217 02:01:36.678713 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.678721 1475658 logs.go:284] No container was found matching "etcd"
	I1217 02:01:36.678728 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:01:36.678789 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:36.705351 1475658 cri.go:89] found id: ""
	I1217 02:01:36.705375 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.705383 1475658 logs.go:284] No container was found matching "coredns"
	I1217 02:01:36.705389 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:36.705452 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:36.730965 1475658 cri.go:89] found id: ""
	I1217 02:01:36.730992 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.731001 1475658 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:36.731008 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:36.731070 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:36.760345 1475658 cri.go:89] found id: ""
	I1217 02:01:36.760370 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.760379 1475658 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:36.760385 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:36.760446 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:36.785560 1475658 cri.go:89] found id: ""
	I1217 02:01:36.785583 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.785592 1475658 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:36.785599 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:36.785697 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:36.814303 1475658 cri.go:89] found id: ""
	I1217 02:01:36.814328 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.814337 1475658 logs.go:284] No container was found matching "kindnet"
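The cri.go/logs.go sweep above queries the CRI for each expected control-plane container and finds none, confirming the kubelet never launched the static pods. The same post-mortem by hand, mirroring the crictl invocations in the log:

	# List all containers containerd's CRI knows about; an empty list
	# confirms the control plane never came up.
	minikube ssh -p no-preload-178365 -- sudo crictl ps -a
	minikube ssh -p no-preload-178365 -- sudo crictl ps -a --quiet --name=kube-apiserver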
	I1217 02:01:36.814347 1475658 logs.go:123] Gathering logs for container status ...
	I1217 02:01:36.814359 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:36.842640 1475658 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:36.842668 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:36.901858 1475658 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:36.901897 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:36.918036 1475658 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:36.918069 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:36.984314 1475658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:36.984350 1475658 logs.go:123] Gathering logs for containerd ...
	I1217 02:01:36.984362 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
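The evidence gathering above pulls kubelet, dmesg, and containerd output over SSH; an approximate manual equivalent of the last two gathers, with flags mirrored from the log:

	minikube ssh -p no-preload-178365 -- sudo journalctl -u containerd -n 400 --no-pager
	minikube ssh -p no-preload-178365 -- sudo dmesg --level warn,err,crit,alert,emerg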
	W1217 02:01:37.028786 1475658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:01:37.028860 1475658 out.go:285] * 
	W1217 02:01:37.028917 1475658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:01:37.028931 1475658 out.go:285] * 
	W1217 02:01:37.031068 1475658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:01:37.037220 1475658 out.go:203] 
	W1217 02:01:37.040930 1475658 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:01:37.041001 1475658 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:01:37.041022 1475658 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:01:37.044273 1475658 out.go:203] 
	
	
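The two SystemVerification warnings above pin down the root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless the KubeletConfiguration field named in the warning, failCgroupV1, is explicitly set to false (and, as the warning also notes, the validation is explicitly skipped). A minimal sketch of such an override, assuming kubeadm's --patches directory convention for the "kubeletconfiguration" target; the patch path and file name here are illustrative, not taken from this run:

	# Hypothetical patch directory; the file name follows kubeadm's
	# <target>+<patchtype>.yaml convention (strategic merge by default).
	sudo mkdir -p /etc/kubeadm/patches
	sudo tee /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml <<'EOF'
	failCgroupV1: false
	EOF
	# Re-run init with the same config the log shows, pointing at the patches:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /etc/kubeadm/patches
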
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:53:20 no-preload-178365 containerd[756]: time="2025-12-17T01:53:20.013986261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.083205389Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.085894407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.093386032Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.094057489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.042937201Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.045143048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058075151Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058727605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.132008848Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.135132972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.143661850Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.144058260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.267145399Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.269771295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.277531008Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.278492420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.715372635Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.717609801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.726893123Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.727845953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.108154182Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.111113669Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120555130Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120954125Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
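Every image above arrives through individual ImageCreate/ImageUpdate events because this profile runs with --preload=false (visible in the Audit table further down), so containerd pulls each control-plane image instead of loading a preload tarball. The resulting set can be listed from inside the node; a sketch, assuming crictl is on PATH via `minikube ssh` as in the kicbase image:

	# Show the control-plane images the events above created
	sudo crictl images | grep -E 'registry.k8s.io|storage-provisioner'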
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:39.693127    5672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:39.693839    5672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:39.696489    5672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:39.697076    5672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:39.698766    5672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
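The connection-refused errors are consistent with the empty container status table above: the kubelet never became healthy, so the static-pod apiserver was never started and nothing listens on port 8443. A quick cross-check from inside the node (a sketch; crictl and the kubelet healthz port are the same ones this log already references):

	# Any kube-apiserver container at all, running or exited?
	sudo crictl ps -a --name kube-apiserver
	# Is the kubelet itself serving its health endpoint?
	curl -sS http://127.0.0.1:10248/healthz || echo "kubelet not serving"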
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:01:39 up  7:44,  0 user,  load average: 0.14, 0.96, 1.67
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:01:36 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 kubelet[5433]: E1217 02:01:37.203785    5433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:37 no-preload-178365 kubelet[5491]: E1217 02:01:37.940889    5491 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:38 no-preload-178365 kubelet[5564]: E1217 02:01:38.622753    5564 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:39 no-preload-178365 kubelet[5609]: E1217 02:01:39.473532    5609 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
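
The restart counter passing 320 shows systemd respawning the kubelet several times per second, each attempt failing the same cgroup v1 validation seen in the kubeadm output. Whether the host really is on cgroup v1 can be verified directly; a sketch using standard commands, not taken from this run:

	# cgroup2fs means unified cgroup v2; tmpfs means legacy cgroup v1
	stat -fc %T /sys/fs/cgroup
	# Docker reports the cgroup version it drives containers with
	docker info --format '{{.CgroupVersion}}'
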
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 6 (362.676948ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:01:40.174382 1492055 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
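
Per the warning in the status output, the stale context is repaired with `minikube update-context`, which rewrites the profile's kubeconfig entry to the current endpoint. A usage sketch against this profile:

	out/minikube-linux-arm64 update-context -p no-preload-178365
	kubectl config current-context
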
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1475961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:53:10.944588207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbc378cb18c4db6321bba9064bec37ae2907203c00dcd497af9edc9b3f71361f",
	            "SandboxKey": "/var/run/docker/netns/dbc378cb18c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:a8:78:cd:87:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "46c074d2d98270a72981dceacb4c45383893c762846fd2a67a1498e3670844fd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
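
The inspect output shows each guest port (22, 2376, 5000, 8443, 32443) published on an ephemeral 127.0.0.1 host port. When scripting against such a node, a Go-template filter pulls out a single mapping; a sketch:

	# Host port backing the apiserver's 8443/tcp (34242 in this run)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-178365
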
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 6 (326.903419ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:01:40.537741 1492143 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-069646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:51 UTC │
	│ start   │ -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:52 UTC │
	│ image   │ old-k8s-version-859530 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ pause   │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ unpause │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
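
The two start rows without an END TIME are the v1.35.0-beta.0 profiles that never completed. The failing no-preload invocation can be replayed verbatim from its ARGS column:

	out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 \
	  --alsologtostderr --wait=true --preload=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
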
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:55:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:55:11.587586 1483412 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:11.587793 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.587821 1483412 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:11.587840 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.588238 1483412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:55:11.589101 1483412 out.go:368] Setting JSON to false
	I1217 01:55:11.589983 1483412 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27462,"bootTime":1765909050,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:55:11.590050 1483412 start.go:143] virtualization:  
	I1217 01:55:11.594008 1483412 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:55:11.598404 1483412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:55:11.598486 1483412 notify.go:221] Checking for updates...
	I1217 01:55:11.605445 1483412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:55:11.608601 1483412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:55:11.611778 1483412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:55:11.614850 1483412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:55:11.617933 1483412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:55:11.621419 1483412 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:11.621527 1483412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:55:11.640802 1483412 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:55:11.640922 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.701423 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.691901377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.701533 1483412 docker.go:319] overlay module found
	I1217 01:55:11.704806 1483412 out.go:179] * Using the docker driver based on user configuration
	I1217 01:55:11.707752 1483412 start.go:309] selected driver: docker
	I1217 01:55:11.707769 1483412 start.go:927] validating driver "docker" against <nil>
	I1217 01:55:11.707784 1483412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:55:11.708522 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.771255 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.762421806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.771409 1483412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:55:11.771445 1483412 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:55:11.771663 1483412 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:55:11.774669 1483412 out.go:179] * Using Docker driver with root privileges
	I1217 01:55:11.777523 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:11.777592 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:11.777607 1483412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:55:11.777735 1483412 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:11.780890 1483412 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 01:55:11.783718 1483412 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:55:11.786584 1483412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:55:11.789380 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:11.789429 1483412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:55:11.789441 1483412 cache.go:65] Caching tarball of preloaded images
	I1217 01:55:11.789467 1483412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:55:11.789532 1483412 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:55:11.789541 1483412 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:55:11.789677 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:11.789696 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json: {Name:mk81bb26d654057444403d949cc7b962f958f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:11.808673 1483412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:55:11.808698 1483412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:55:11.808713 1483412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:55:11.808743 1483412 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:55:11.808846 1483412 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "newest-cni-456492"
	I1217 01:55:11.808876 1483412 start.go:93] Provisioning new machine with config: &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:55:11.808947 1483412 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:55:11.812418 1483412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:55:11.812643 1483412 start.go:159] libmachine.API.Create for "newest-cni-456492" (driver="docker")
	I1217 01:55:11.812678 1483412 client.go:173] LocalClient.Create starting
	I1217 01:55:11.812766 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:55:11.812806 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812824 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.812874 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:55:11.812896 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812911 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.813288 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:55:11.828937 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:55:11.829030 1483412 network_create.go:284] running [docker network inspect newest-cni-456492] to gather additional debugging logs...
	I1217 01:55:11.829050 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492
	W1217 01:55:11.845086 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 returned with exit code 1
	I1217 01:55:11.845116 1483412 network_create.go:287] error running [docker network inspect newest-cni-456492]: docker network inspect newest-cni-456492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-456492 not found
	I1217 01:55:11.845144 1483412 network_create.go:289] output of [docker network inspect newest-cni-456492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-456492 not found
	
	** /stderr **
	I1217 01:55:11.845236 1483412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:11.862130 1483412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:55:11.862454 1483412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:55:11.862764 1483412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:55:11.862966 1483412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 01:55:11.863436 1483412 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bb4b0}
	I1217 01:55:11.863452 1483412 network_create.go:124] attempt to create docker network newest-cni-456492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:55:11.863519 1483412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-456492 newest-cni-456492
	I1217 01:55:11.939566 1483412 network_create.go:108] docker network newest-cni-456492 192.168.85.0/24 created
	I1217 01:55:11.939593 1483412 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-456492" container
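The four "skipping subnet" lines above step through 192.168.49/58/67/76.0/24 before settling on 192.168.85.0/24, i.e. candidate /24 blocks spaced 9 apart in this log. A Go sketch of that first-free scan; `taken` is a hypothetical stand-in for the real bridge-interface lookup, and the real picker may differ in its candidate list:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks candidate 192.168.x.0/24 blocks 9 apart
    // (49, 58, 67, 76, 85, ...) and returns the first one the probe rejects.
    func firstFreeSubnet(taken func(string) bool) *net.IPNet {
        for third := 49; third < 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if _, subnet, err := net.ParseCIDR(cidr); err == nil && !taken(cidr) {
                return subnet
            }
        }
        return nil
    }

    func main() {
        used := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        fmt.Println(firstFreeSubnet(func(c string) bool { return used[c] })) // 192.168.85.0/24
    }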
	I1217 01:55:11.939681 1483412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:55:11.956827 1483412 cli_runner.go:164] Run: docker volume create newest-cni-456492 --label name.minikube.sigs.k8s.io=newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:55:11.974528 1483412 oci.go:103] Successfully created a docker volume newest-cni-456492
	I1217 01:55:11.974628 1483412 cli_runner.go:164] Run: docker run --rm --name newest-cni-456492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --entrypoint /usr/bin/test -v newest-cni-456492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:55:12.497008 1483412 oci.go:107] Successfully prepared a docker volume newest-cni-456492
	I1217 01:55:12.497078 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:12.497091 1483412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:55:12.497172 1483412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:55:16.389962 1483412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.892749984s)
	I1217 01:55:16.389996 1483412 kic.go:203] duration metric: took 3.892902757s to extract preloaded images to volume ...
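The extraction above runs tar inside a throwaway kicbase container so the lz4 preload lands directly in the machine's named volume. The same invocation as an os/exec sketch, with the host path and image digest abbreviated for readability:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Bind-mount the preload read-only, mount the machine volume at
        // /extractDir, and run tar inside the kicbase image.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "newest-cni-456492:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }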
	W1217 01:55:16.390136 1483412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:55:16.390261 1483412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:55:16.462546 1483412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-456492 --name newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-456492 --network newest-cni-456492 --ip 192.168.85.2 --volume newest-cni-456492:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:55:16.772361 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Running}}
	I1217 01:55:16.793387 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:16.820136 1483412 cli_runner.go:164] Run: docker exec newest-cni-456492 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:55:16.881491 1483412 oci.go:144] the created container "newest-cni-456492" has a running status.
	I1217 01:55:16.881521 1483412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa...
	I1217 01:55:17.289070 1483412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:55:17.323822 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.352076 1483412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:55:17.352103 1483412 kic_runner.go:114] Args: [docker exec --privileged newest-cni-456492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:55:17.412601 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.440021 1483412 machine.go:94] provisionDockerMachine start ...
	I1217 01:55:17.440112 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:17.465337 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:17.465706 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:17.465717 1483412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:55:17.466482 1483412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45132->127.0.0.1:34249: read: connection reset by peer
	I1217 01:55:20.597038 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.597109 1483412 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 01:55:20.597212 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.614509 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.614828 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.614859 1483412 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 01:55:20.756257 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.756341 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.774598 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.774975 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.774999 1483412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:55:20.905912 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
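From here on, provisioning runs over the "native" SSH client shown above, which dials the container's published 22/tcp on 127.0.0.1:34249 with the generated id_rsa (the first dial at 01:55:17 hit a connection reset while sshd was still starting; the retry succeeded). A golang.org/x/crypto/ssh sketch of such a client; skipping host-key verification is an assumption suited only to a throwaway local machine, not a general recommendation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34249", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local machine; see note above
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname") // the first command in the log
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }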
	I1217 01:55:20.905939 1483412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:55:20.905956 1483412 ubuntu.go:190] setting up certificates
	I1217 01:55:20.905965 1483412 provision.go:84] configureAuth start
	I1217 01:55:20.906024 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:20.923247 1483412 provision.go:143] copyHostCerts
	I1217 01:55:20.923326 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:55:20.923339 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:55:20.923416 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:55:20.923533 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:55:20.923544 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:55:20.923576 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:55:20.923649 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:55:20.923659 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:55:20.923689 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:55:20.923744 1483412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
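The server cert above is signed by the local CA with SANs covering the loopback address, the node IP, and the machine's hostnames. A crypto/x509 sketch of generating such a cert; the key size and serial are illustrative, PEM nil-checks are omitted for brevity, and the CA key is assumed to be PKCS#1 RSA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Stand-in paths for .minikube/certs/ca.pem and ca-key.pem.
        caPEM, err := os.ReadFile("ca.pem")
        check(err)
        keyPEM, err := os.ReadFile("ca-key.pem")
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        keyBlock, _ := pem.Decode(keyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        check(err)

        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
            // The SAN set from the log line above:
            DNSNames:    []string{"localhost", "minikube", "newest-cni-456492"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }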
	I1217 01:55:21.003325 1483412 provision.go:177] copyRemoteCerts
	I1217 01:55:21.003406 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:55:21.003466 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.021337 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.118292 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:55:21.145239 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:55:21.164973 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:55:21.184653 1483412 provision.go:87] duration metric: took 278.664546ms to configureAuth
	I1217 01:55:21.184681 1483412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:55:21.184876 1483412 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:21.184890 1483412 machine.go:97] duration metric: took 3.744849982s to provisionDockerMachine
	I1217 01:55:21.184897 1483412 client.go:176] duration metric: took 9.372209957s to LocalClient.Create
	I1217 01:55:21.184913 1483412 start.go:167] duration metric: took 9.372271349s to libmachine.API.Create "newest-cni-456492"
	I1217 01:55:21.184924 1483412 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 01:55:21.184935 1483412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:55:21.184993 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:55:21.185038 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.202893 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.301704 1483412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:55:21.305094 1483412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:55:21.305120 1483412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:55:21.305132 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:55:21.305183 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:55:21.305257 1483412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:55:21.305367 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:55:21.313575 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:21.332690 1483412 start.go:296] duration metric: took 147.751178ms for postStartSetup
	I1217 01:55:21.333071 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.349950 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:21.350233 1483412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:55:21.350284 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.367086 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.458630 1483412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:55:21.463314 1483412 start.go:128] duration metric: took 9.65435334s to createHost
	I1217 01:55:21.463343 1483412 start.go:83] releasing machines lock for "newest-cni-456492", held for 9.654483449s
	I1217 01:55:21.463413 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.480150 1483412 ssh_runner.go:195] Run: cat /version.json
	I1217 01:55:21.480207 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.480490 1483412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:55:21.480549 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.503493 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.506377 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.593349 1483412 ssh_runner.go:195] Run: systemctl --version
	I1217 01:55:21.687982 1483412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:55:21.692115 1483412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:55:21.692182 1483412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:55:21.718403 1483412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:55:21.718427 1483412 start.go:496] detecting cgroup driver to use...
	I1217 01:55:21.718460 1483412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:55:21.718523 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:55:21.733259 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:55:21.746485 1483412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:55:21.746571 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:55:21.764553 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:55:21.782958 1483412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:55:21.908620 1483412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:55:22.026459 1483412 docker.go:234] disabling docker service ...
	I1217 01:55:22.026538 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:55:22.052603 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:55:22.068218 1483412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:55:22.193394 1483412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:55:22.321475 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:55:22.334922 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:55:22.349881 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:55:22.359035 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:55:22.368328 1483412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:55:22.368453 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:55:22.377717 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.387475 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:55:22.396690 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.405767 1483412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:55:22.414387 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:55:22.423447 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:55:22.432777 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:55:22.442244 1483412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:55:22.450102 1483412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:55:22.457779 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:22.584574 1483412 ssh_runner.go:195] Run: sudo systemctl restart containerd
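The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, normalize the runtime to io.containerd.runc.v2, and re-enable unprivileged ports. As one example, the SystemdCgroup edit is equivalent to this Go regexp rewrite:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Same effect as the sed line in the log: whatever the current value,
        // force SystemdCgroup = false while preserving the indentation.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }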
	I1217 01:55:22.739170 1483412 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:55:22.739315 1483412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:55:22.743653 1483412 start.go:564] Will wait 60s for crictl version
	I1217 01:55:22.743721 1483412 ssh_runner.go:195] Run: which crictl
	I1217 01:55:22.747627 1483412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:55:22.774963 1483412 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:55:22.775088 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.795646 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.822177 1483412 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:55:22.825213 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:22.841339 1483412 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 01:55:22.845097 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:22.857844 1483412 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:55:22.860750 1483412 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:55:22.860891 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:22.860986 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.887811 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.887838 1483412 containerd.go:534] Images already preloaded, skipping extraction
	I1217 01:55:22.887921 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.916774 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.916798 1483412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:55:22.916806 1483412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:55:22.916901 1483412 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:55:22.916973 1483412 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:55:22.941450 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:22.941474 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:22.941497 1483412 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:55:22.941521 1483412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:55:22.941668 1483412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:55:22.941741 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:55:22.949446 1483412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:55:22.949536 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:55:22.957307 1483412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:55:22.970080 1483412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:55:22.983144 1483412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 01:55:22.996455 1483412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:55:23.000264 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
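This bash pipeline (used earlier for host.minikube.internal and here for control-plane.minikube.internal) is an idempotent /etc/hosts update: drop any existing line for the name, then append the fresh mapping. The same logic in Go, with `upsertHost` as an illustrative helper name:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost mirrors the grep -v / echo / cp pipeline: remove any existing
    // tab-separated mapping for name, then append the fresh one.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }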
	I1217 01:55:23.011956 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:23.132195 1483412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:55:23.153898 1483412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 01:55:23.153924 1483412 certs.go:195] generating shared ca certs ...
	I1217 01:55:23.153953 1483412 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.154120 1483412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:55:23.154167 1483412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:55:23.154179 1483412 certs.go:257] generating profile certs ...
	I1217 01:55:23.154252 1483412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 01:55:23.154267 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt with IP's: []
	I1217 01:55:23.536556 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt ...
	I1217 01:55:23.536598 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt: {Name:mk5f328f97a5398eaf8448e799e55e14628a21cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536799 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key ...
	I1217 01:55:23.536813 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key: {Name:mk204e71ac4a7537095f4378fcacae497aae9e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536900 1483412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 01:55:23.536919 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 01:55:23.700587 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d ...
	I1217 01:55:23.700617 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d: {Name:mk2ff6ffd7e0f9e8790c41f75004f783e2e2cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700810 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d ...
	I1217 01:55:23.700838 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d: {Name:mk4a8fd878c1db6fa4ca6d31ac312311a9e574fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700939 1483412 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt
	I1217 01:55:23.701025 1483412 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key
	I1217 01:55:23.701086 1483412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 01:55:23.701104 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt with IP's: []
	I1217 01:55:24.186185 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt ...
	I1217 01:55:24.186218 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt: {Name:mk4e097689774236e217287c4769a9bc6b62d157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186434 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key ...
	I1217 01:55:24.186460 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key: {Name:mk9311419a1f9f3ab4e171bbfc5a685160d56892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186687 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:55:24.186737 1483412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:55:24.186753 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:55:24.186781 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:55:24.186819 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:55:24.186847 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:55:24.186901 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:24.187489 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:55:24.207140 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:55:24.225813 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:55:24.244898 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:55:24.264402 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:55:24.283038 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:55:24.302197 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:55:24.320347 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:55:24.339022 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:55:24.357411 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:55:24.375801 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:55:24.394312 1483412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:55:24.407959 1483412 ssh_runner.go:195] Run: openssl version
	I1217 01:55:24.414593 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.422149 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:55:24.429938 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433843 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433913 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.475535 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.483235 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.490706 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.498434 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:55:24.506686 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510403 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510492 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.551573 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:55:24.559261 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:55:24.566821 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.574182 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:55:24.581528 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585424 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585508 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.628267 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:55:24.636095 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
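Each trusted cert above gets an OpenSSL subject-hash symlink (3ec20f2e.0 for 12112432.pem, b5213941.0 for minikubeCA.pem, 51391683.0 for 1211243.pem) so tools that scan the system trust store by hash can resolve it. A sketch of that hash-and-link step:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash,
        // e.g. b5213941 for minikubeCA in the log above.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // mirror `ln -fs`: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }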
	I1217 01:55:24.643970 1483412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:55:24.648671 1483412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:55:24.648775 1483412 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:24.648946 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:55:24.649043 1483412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:55:24.677969 1483412 cri.go:89] found id: ""
	I1217 01:55:24.678093 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:55:24.688459 1483412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:55:24.696458 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:55:24.696550 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:55:24.704828 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:55:24.704848 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:55:24.704931 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:55:24.712883 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:55:24.712983 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:55:24.720826 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:55:24.728999 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:55:24.729100 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:55:24.736825 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.744799 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:55:24.744867 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.752477 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:55:24.760816 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:55:24.760931 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
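The four grep/rm pairs above are the stale-kubeconfig cleanup: a config file survives only if it already points at https://control-plane.minikube.internal:8443, and on this first start every file is absent, so each is cleared before kubeadm init. The check, compactly, as a Go sketch:

    package main

    import (
        "bytes"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8443")
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                os.Remove(f) // unreadable or pointing elsewhere: let kubeadm regenerate it
            }
        }
    }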
	I1217 01:55:24.768678 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:55:24.810821 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:55:24.811126 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:55:24.896174 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:55:24.896294 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:55:24.896359 1483412 kubeadm.go:319] OS: Linux
	I1217 01:55:24.896426 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:55:24.896502 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:55:24.896566 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:55:24.896639 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:55:24.896704 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:55:24.896779 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:55:24.896863 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:55:24.896941 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:55:24.897010 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:55:24.971043 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:55:24.971234 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:55:24.971378 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:55:24.982063 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:55:24.988218 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:55:24.988318 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:55:24.988395 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:55:25.419455 1483412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:55:25.522339 1483412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:55:25.598229 1483412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:55:25.671518 1483412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:55:25.854804 1483412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:55:25.855019 1483412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.196066 1483412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:55:26.196425 1483412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.785707 1483412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:55:26.841556 1483412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:55:27.019008 1483412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:55:27.019328 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:55:27.196727 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:55:27.751450 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:55:27.908167 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:55:28.296645 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:55:28.549325 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:55:28.550095 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:55:28.554755 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:55:28.558438 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:55:28.558547 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:55:28.558629 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:55:28.558695 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:55:28.574196 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:55:28.574560 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:55:28.582119 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:55:28.582467 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:55:28.582759 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:55:28.732745 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:55:28.732882 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
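	The [kubelet-check] phase that starts here polls the kubelet's local healthz endpoint on port 10248 for up to 4m0s before giving up. The probe can be reproduced by hand on the node; a minimal sketch using the curl invocation quoted later in this log (the systemctl/journalctl follow-ups are the ones kubeadm itself suggests on failure):

	    # The health probe kubeadm polls during wait-control-plane.
	    curl -sSL http://127.0.0.1:10248/healthz; echo
	    # If it hangs or is refused, inspect the kubelet service directly:
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet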
	I1217 01:57:34.124748 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:57:34.124781 1475658 kubeadm.go:319] 
	I1217 01:57:34.124851 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:57:34.130032 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.130094 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.130184 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.130239 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.130274 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.130319 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.130369 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.130417 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.130466 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.130513 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.130562 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.130607 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.130655 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.130701 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.130774 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.130869 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.130959 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.131021 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.134054 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.134142 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.134206 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.134273 1475658 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:57:34.134329 1475658 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:57:34.134389 1475658 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:57:34.134439 1475658 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:57:34.134492 1475658 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:57:34.134614 1475658 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134712 1475658 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:57:34.134885 1475658 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134988 1475658 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:57:34.135097 1475658 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:57:34.135183 1475658 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:57:34.135283 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:34.135344 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:34.135402 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:34.135459 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:34.135521 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:34.135575 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:34.135655 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:34.135721 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:34.138598 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:34.138713 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:34.138799 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:34.138871 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:34.138982 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:34.139083 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:34.139203 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:34.139301 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:34.139344 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:34.139483 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:34.139594 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.139663 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005993508s
	I1217 01:57:34.139667 1475658 kubeadm.go:319] 
	I1217 01:57:34.139728 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:57:34.139770 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:57:34.139882 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:57:34.139887 1475658 kubeadm.go:319] 
	I1217 01:57:34.139998 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:57:34.140032 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:57:34.140065 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
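	kubeadm's hint above names the standard systemd entry points, but with the docker driver the kubelet runs inside the minikube node container, not on the Jenkins host, so the commands have to be executed there. A sketch using minikube ssh with the profile name taken from this log:

	    # Run kubeadm's suggested checks inside the node container.
	    minikube -p no-preload-178365 ssh -- sudo systemctl status kubelet
	    minikube -p no-preload-178365 ssh -- sudo journalctl -xeu kubelet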
	W1217 01:57:34.140174 1475658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005993508s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:57:34.140253 1475658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:57:34.140626 1475658 kubeadm.go:319] 
	I1217 01:57:34.576208 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
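	Between attempts minikube tears the half-initialized control plane down with kubeadm reset over the containerd CRI socket and then verifies the kubelet is no longer active, as the two commands just above show. The equivalent by hand, with the socket path from this run (kubeadm invoked from PATH rather than the pinned minikube binaries directory):

	    # Reset node state before retrying 'kubeadm init'.
	    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    # Confirm nothing is still running:
	    sudo systemctl is-active --quiet kubelet && echo "kubelet still active"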
	I1217 01:57:34.589972 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:34.590043 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:34.598643 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:34.598705 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:34.598780 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:34.606738 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:34.606852 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:34.614781 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:34.622706 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:34.622772 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:34.630400 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.638446 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:34.638512 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.646373 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:34.654277 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:34.654364 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:57:34.662056 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:34.702011 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.702113 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.773814 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.773913 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.773969 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.774045 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.774109 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.774187 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.774266 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.774339 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.774416 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.774474 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.774547 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.774609 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.846561 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.846676 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.846767 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.854122 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.857357 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.857482 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.857567 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.857679 1475658 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:57:34.857759 1475658 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:57:34.857854 1475658 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:57:34.857924 1475658 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:57:34.858004 1475658 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:57:34.858087 1475658 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:57:34.858187 1475658 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:57:34.858274 1475658 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:57:34.858318 1475658 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:57:34.858386 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:35.122967 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:35.269702 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:35.473145 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:36.090186 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:36.438081 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:36.439114 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:36.441843 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:36.444972 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:36.445093 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:36.445187 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:36.447586 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:36.469683 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:36.469812 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:36.477712 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:36.478146 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:36.478375 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:36.619400 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:36.619522 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:59:28.732281 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001211696s
	I1217 01:59:28.732307 1483412 kubeadm.go:319] 
	I1217 01:59:28.732365 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:59:28.732399 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:59:28.732504 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:59:28.732508 1483412 kubeadm.go:319] 
	I1217 01:59:28.732613 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:59:28.732645 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:59:28.732676 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:59:28.732680 1483412 kubeadm.go:319] 
	I1217 01:59:28.737697 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:59:28.738161 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:59:28.738281 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:59:28.738538 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:59:28.738549 1483412 kubeadm.go:319] 
	I1217 01:59:28.738623 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
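	The two SystemVerification warnings above point at host-level conditions rather than anything kubeadm can fix: the 'configs' kernel module is absent (so the kernel config cannot be parsed) and the runner is still on cgroups v1. Both are quick to confirm by hand; a sketch using standard Linux tooling, not commands from this log:

	    # Is the kernel build config exposed anywhere kubeadm's parser can read?
	    ls /proc/config.gz /boot/config-"$(uname -r)" 2>/dev/null
	    # Which cgroup hierarchy is mounted? 'cgroup2fs' = v2, 'tmpfs' = v1.
	    stat -fc %T /sys/fs/cgroup/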
	W1217 01:59:28.738846 1483412 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001211696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:59:28.738945 1483412 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:59:29.148897 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:59:29.163236 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:59:29.163322 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:59:29.173290 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:59:29.173315 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:59:29.173378 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:59:29.189171 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:59:29.189238 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:59:29.198769 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:59:29.206895 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:59:29.206960 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:59:29.214464 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.222503 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:59:29.222596 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.230032 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:59:29.237621 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:59:29.237713 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:59:29.244936 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:59:29.283887 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:59:29.284148 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:59:29.355640 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:59:29.355800 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:59:29.355878 1483412 kubeadm.go:319] OS: Linux
	I1217 01:59:29.355962 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:59:29.356047 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:59:29.356127 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:59:29.356205 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:59:29.356285 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:59:29.356371 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:59:29.356449 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:59:29.356530 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:59:29.356609 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:59:29.424082 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:59:29.424247 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:59:29.424404 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:59:29.430675 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:59:29.436331 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:59:29.436427 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:59:29.436498 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:59:29.436614 1483412 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:59:29.436760 1483412 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:59:29.436868 1483412 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:59:29.436955 1483412 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:59:29.437066 1483412 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:59:29.437169 1483412 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:59:29.437294 1483412 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:59:29.437455 1483412 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:59:29.437914 1483412 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:59:29.438023 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:59:29.643674 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:59:29.811188 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:59:30.039930 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:59:30.429283 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:59:30.523266 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:59:30.523965 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:59:30.526610 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:59:30.529865 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:59:30.529993 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:59:30.530148 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:59:30.530270 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:59:30.551379 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:59:30.551496 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:59:30.562968 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:59:30.563492 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:59:30.563746 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:59:30.712531 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:59:30.712658 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:36.620744 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001388969s
	I1217 02:01:36.620785 1475658 kubeadm.go:319] 
	I1217 02:01:36.620840 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:36.620873 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:36.620977 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:36.620988 1475658 kubeadm.go:319] 
	I1217 02:01:36.621087 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:36.621122 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:36.621154 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:36.621162 1475658 kubeadm.go:319] 
	I1217 02:01:36.624858 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:01:36.625354 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:36.625468 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:36.625731 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:36.625742 1475658 kubeadm.go:319] 
	I1217 02:01:36.625808 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
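	Note the change in failure mode between attempts: the first init failed with "context deadline exceeded" (the healthz request never completed within its timeout), while this one fails with "connection refused" (nothing accepting connections on 10248 at all). Checking for a listener separates the two cases; a sketch using standard tooling, not a command from this log:

	    # Is anything listening on the kubelet healthz port?
	    ss -ltnp | grep -w 10248 || echo "no listener on 10248"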
	I1217 02:01:36.625889 1475658 kubeadm.go:403] duration metric: took 8m7.357719708s to StartCluster
	I1217 02:01:36.625944 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:36.626024 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:36.652571 1475658 cri.go:89] found id: ""
	I1217 02:01:36.652609 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.652624 1475658 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:36.652631 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:01:36.652704 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:36.678690 1475658 cri.go:89] found id: ""
	I1217 02:01:36.678713 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.678721 1475658 logs.go:284] No container was found matching "etcd"
	I1217 02:01:36.678728 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:01:36.678789 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:36.705351 1475658 cri.go:89] found id: ""
	I1217 02:01:36.705375 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.705383 1475658 logs.go:284] No container was found matching "coredns"
	I1217 02:01:36.705389 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:36.705452 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:36.730965 1475658 cri.go:89] found id: ""
	I1217 02:01:36.730992 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.731001 1475658 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:36.731008 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:36.731070 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:36.760345 1475658 cri.go:89] found id: ""
	I1217 02:01:36.760370 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.760379 1475658 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:36.760385 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:36.760446 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:36.785560 1475658 cri.go:89] found id: ""
	I1217 02:01:36.785583 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.785592 1475658 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:36.785599 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:36.785697 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:36.814303 1475658 cri.go:89] found id: ""
	I1217 02:01:36.814328 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.814337 1475658 logs.go:284] No container was found matching "kindnet"
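	The seven queries above confirm the runtime never launched any control-plane container: every crictl listing returns an empty ID set. The same inventory in one loop, using the component names and the crictl invocation from this log:

	    # Per-component container inventory over the CRI socket.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      printf '%s: ' "$c"
	      sudo crictl ps -a --quiet --name="$c" | wc -l
	    done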
	I1217 02:01:36.814347 1475658 logs.go:123] Gathering logs for container status ...
	I1217 02:01:36.814359 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:36.842640 1475658 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:36.842668 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:36.901858 1475658 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:36.901897 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:36.918036 1475658 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:36.918069 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:36.984314 1475658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:36.984350 1475658 logs.go:123] Gathering logs for containerd ...
	I1217 02:01:36.984362 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
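	The gathering pass above collects container status, the kubelet and containerd journals, kernel warnings from dmesg, and a kubectl describe nodes that necessarily fails because the apiserver never came up (hence the connection-refused errors against localhost:8443). Reproducing the collection by hand, with the commands from this log slightly simplified and --no-pager added for interactive use:

	    sudo crictl ps -a
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400 --no-pager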
	W1217 02:01:37.028786 1475658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:01:37.028860 1475658 out.go:285] * 
	W1217 02:01:37.028917 1475658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(kubeadm init stdout/stderr identical to the block above; duplicate elided)
	W1217 02:01:37.028931 1475658 out.go:285] * 
	W1217 02:01:37.031068 1475658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:01:37.037220 1475658 out.go:203] 
	W1217 02:01:37.040930 1475658 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(kubeadm init stdout/stderr identical to the block above; duplicate elided)
	W1217 02:01:37.041001 1475658 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:01:37.041022 1475658 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:01:37.044273 1475658 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:53:20 no-preload-178365 containerd[756]: time="2025-12-17T01:53:20.013986261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.083205389Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.085894407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.093386032Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.094057489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.042937201Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.045143048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058075151Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058727605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.132008848Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.135132972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.143661850Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.144058260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.267145399Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.269771295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.277531008Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.278492420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.715372635Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.717609801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.726893123Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.727845953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.108154182Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.111113669Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120555130Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120954125Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:41.210564    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:41.211281    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:41.212943    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:41.214472    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:41.215889    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:01:41 up  7:44,  0 user,  load average: 0.14, 0.96, 1.67
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:01:37 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:38 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:38 no-preload-178365 kubelet[5564]: E1217 02:01:38.622753    5564 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:38 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:39 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:39 no-preload-178365 kubelet[5609]: E1217 02:01:39.473532    5609 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:39 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:40 no-preload-178365 kubelet[5699]: E1217 02:01:40.190024    5699 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:40 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:01:40 no-preload-178365 kubelet[5735]: E1217 02:01:40.955880    5735 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:01:40 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 6 (358.650766ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:01:41.698439 1492360 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.04s)
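
The kubelet journal above shows the root cause for this group: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits immediately at startup ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver on :8443 never comes up and every dependent step fails with connection refused. As a hedged editorial sketch (not part of the test run; profile name, flags, and versions are taken from the log above), the [WARNING SystemVerification] text names the opt-out option, and minikube's own suggestion names a start flag to try:

	# Sketch only. The KubeletConfiguration option named in the warning above:
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false
	# Or retry with the flag minikube itself suggests for kubelet/cgroup issues:
	out/minikube-linux-arm64 start -p no-preload-178365 --preload=false \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd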

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (83.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 02:01:56.876733 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:02:01.144659 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m21.637816091s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-178365 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-178365 describe deploy/metrics-server -n kube-system: exit status 1 (76.223321ms)

** stderr ** 
	error: context "no-preload-178365" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-178365 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
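
Editorial note, hedged: every apply error above is the same dial tcp [::1]:8443 connection refused, so this addon failure is downstream of the apiserver never starting (the kubelet crash loop from the previous test), not a manifest problem; kubectl's --validate=false hint would not help. One way to confirm the apiserver state from inside the node, assuming the profile name from this run and curl being present in the node image:

	out/minikube-linux-arm64 -p no-preload-178365 ssh "curl -sk https://localhost:8443/healthz"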
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1475961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:53:10.944588207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbc378cb18c4db6321bba9064bec37ae2907203c00dcd497af9edc9b3f71361f",
	            "SandboxKey": "/var/run/docker/netns/dbc378cb18c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:a8:78:cd:87:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "46c074d2d98270a72981dceacb4c45383893c762846fd2a67a1498e3670844fd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 6 (298.263726ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:03:03.742837 1493840 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:51 UTC │ 17 Dec 25 01:52 UTC │
	│ image   │ old-k8s-version-859530 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ pause   │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ unpause │ -p old-k8s-version-859530 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:55:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:55:11.587586 1483412 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:11.587793 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.587821 1483412 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:11.587840 1483412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:11.588238 1483412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:55:11.589101 1483412 out.go:368] Setting JSON to false
	I1217 01:55:11.589983 1483412 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27462,"bootTime":1765909050,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:55:11.590050 1483412 start.go:143] virtualization:  
	I1217 01:55:11.594008 1483412 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:55:11.598404 1483412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:55:11.598486 1483412 notify.go:221] Checking for updates...
	I1217 01:55:11.605445 1483412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:55:11.608601 1483412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:55:11.611778 1483412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:55:11.614850 1483412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:55:11.617933 1483412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:55:11.621419 1483412 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:11.621527 1483412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:55:11.640802 1483412 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:55:11.640922 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.701423 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.691901377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
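The driver check above (repeated once more a few lines below, after driver selection) is just `docker system info --format "{{json .}}"` decoded into the struct printed in the log. To eyeball the same fields by hand, a minimal sketch (assumes `jq` is available on the host):

    # Pull the fields minikube's driver validation cares about out of the same JSON
    docker system info --format '{{json .}}' \
      | jq '{Driver, CgroupDriver, NCPU, MemTotal, ServerVersion, OperatingSystem}'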
	I1217 01:55:11.701533 1483412 docker.go:319] overlay module found
	I1217 01:55:11.704806 1483412 out.go:179] * Using the docker driver based on user configuration
	I1217 01:55:11.707752 1483412 start.go:309] selected driver: docker
	I1217 01:55:11.707769 1483412 start.go:927] validating driver "docker" against <nil>
	I1217 01:55:11.707784 1483412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:55:11.708522 1483412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:55:11.771255 1483412 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:55:11.762421806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:55:11.771409 1483412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:55:11.771445 1483412 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:55:11.771663 1483412 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:55:11.774669 1483412 out.go:179] * Using Docker driver with root privileges
	I1217 01:55:11.777523 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:11.777592 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:11.777607 1483412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:55:11.777735 1483412 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:11.780890 1483412 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 01:55:11.783718 1483412 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 01:55:11.786584 1483412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:55:11.789380 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:11.789429 1483412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 01:55:11.789441 1483412 cache.go:65] Caching tarball of preloaded images
	I1217 01:55:11.789467 1483412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:55:11.789532 1483412 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 01:55:11.789541 1483412 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 01:55:11.789677 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:11.789696 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json: {Name:mk81bb26d654057444403d949cc7b962f958f7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:11.808673 1483412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:55:11.808698 1483412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:55:11.808713 1483412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:55:11.808743 1483412 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:55:11.808846 1483412 start.go:364] duration metric: took 82.725µs to acquireMachinesLock for "newest-cni-456492"
	I1217 01:55:11.808876 1483412 start.go:93] Provisioning new machine with config: &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 01:55:11.808947 1483412 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:55:11.812418 1483412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:55:11.812643 1483412 start.go:159] libmachine.API.Create for "newest-cni-456492" (driver="docker")
	I1217 01:55:11.812678 1483412 client.go:173] LocalClient.Create starting
	I1217 01:55:11.812766 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 01:55:11.812806 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812824 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.812874 1483412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 01:55:11.812896 1483412 main.go:143] libmachine: Decoding PEM data...
	I1217 01:55:11.812911 1483412 main.go:143] libmachine: Parsing certificate...
	I1217 01:55:11.813288 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:55:11.828937 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:55:11.829030 1483412 network_create.go:284] running [docker network inspect newest-cni-456492] to gather additional debugging logs...
	I1217 01:55:11.829050 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492
	W1217 01:55:11.845086 1483412 cli_runner.go:211] docker network inspect newest-cni-456492 returned with exit code 1
	I1217 01:55:11.845116 1483412 network_create.go:287] error running [docker network inspect newest-cni-456492]: docker network inspect newest-cni-456492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-456492 not found
	I1217 01:55:11.845144 1483412 network_create.go:289] output of [docker network inspect newest-cni-456492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-456492 not found
	
	** /stderr **
	I1217 01:55:11.845236 1483412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:11.862130 1483412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 01:55:11.862454 1483412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 01:55:11.862764 1483412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 01:55:11.862966 1483412 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 01:55:11.863436 1483412 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bb4b0}
	I1217 01:55:11.863452 1483412 network_create.go:124] attempt to create docker network newest-cni-456492 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:55:11.863519 1483412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-456492 newest-cni-456492
	I1217 01:55:11.939566 1483412 network_create.go:108] docker network newest-cni-456492 192.168.85.0/24 created
	I1217 01:55:11.939593 1483412 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-456492" container
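The four "skipping subnet" probes above walk minikube's candidate /24s (49, 58, 67, 76, ...) until one is free, and the network is then created with a pinned gateway so the node can be handed the deterministic .2 address. The same pattern by hand, with illustrative names (`demo-net`, `demo`):

    # Bridge network with a fixed subnet/gateway, then a container with a static IP
    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o com.docker.network.driver.mtu=1500 demo-net
    docker run -d --name demo --network demo-net --ip 192.168.85.2 alpine sleep 1d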
	I1217 01:55:11.939681 1483412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:55:11.956827 1483412 cli_runner.go:164] Run: docker volume create newest-cni-456492 --label name.minikube.sigs.k8s.io=newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:55:11.974528 1483412 oci.go:103] Successfully created a docker volume newest-cni-456492
	I1217 01:55:11.974628 1483412 cli_runner.go:164] Run: docker run --rm --name newest-cni-456492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --entrypoint /usr/bin/test -v newest-cni-456492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:55:12.497008 1483412 oci.go:107] Successfully prepared a docker volume newest-cni-456492
	I1217 01:55:12.497078 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:12.497091 1483412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:55:12.497172 1483412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:55:16.389962 1483412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-456492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.892749984s)
	I1217 01:55:16.389996 1483412 kic.go:203] duration metric: took 3.892902757s to extract preloaded images to volume ...
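That ~3.9s step is how minikube seeds /var without touching the host filesystem: the preload tarball is bind-mounted read-only into a throwaway container whose entrypoint is `tar`, and extracted straight into the named volume. The same trick works for any .tar.lz4; here the kicbase image is reused since it ships both tar and lz4 (paths are placeholders):

    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 \
      -I lz4 -xf /preloaded.tar -C /extractDir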
	W1217 01:55:16.390136 1483412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:55:16.390261 1483412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:55:16.462546 1483412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-456492 --name newest-cni-456492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-456492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-456492 --network newest-cni-456492 --ip 192.168.85.2 --volume newest-cni-456492:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
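Two things worth noting in that `docker run`: the node is a privileged container with tmpfs /tmp and /run and the preloaded volume mounted at /var, and every service port (8443 API server, 22 SSH, 2376, 5000, 32443) is published to an ephemeral port on 127.0.0.1 via `--publish=127.0.0.1::<port>`. The host side of a mapping is discovered afterwards:

    # Which ephemeral loopback port fronts the API server?
    docker port newest-cni-456492 8443
    # prints 127.0.0.1:<ephemeral-port>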
	I1217 01:55:16.772361 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Running}}
	I1217 01:55:16.793387 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:16.820136 1483412 cli_runner.go:164] Run: docker exec newest-cni-456492 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:55:16.881491 1483412 oci.go:144] the created container "newest-cni-456492" has a running status.
	I1217 01:55:16.881521 1483412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa...
	I1217 01:55:17.289070 1483412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:55:17.323822 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.352076 1483412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:55:17.352103 1483412 kic_runner.go:114] Args: [docker exec --privileged newest-cni-456492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:55:17.412601 1483412 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 01:55:17.440021 1483412 machine.go:94] provisionDockerMachine start ...
	I1217 01:55:17.440112 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:17.465337 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:17.465706 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:17.465717 1483412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:55:17.466482 1483412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45132->127.0.0.1:34249: read: connection reset by peer
	I1217 01:55:20.597038 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.597109 1483412 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 01:55:20.597212 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.614509 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.614828 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.614859 1483412 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 01:55:20.756257 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 01:55:20.756341 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:20.774598 1483412 main.go:143] libmachine: Using SSH client type: native
	I1217 01:55:20.774975 1483412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I1217 01:55:20.774999 1483412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:55:20.905912 1483412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:55:20.905939 1483412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 01:55:20.905956 1483412 ubuntu.go:190] setting up certificates
	I1217 01:55:20.905965 1483412 provision.go:84] configureAuth start
	I1217 01:55:20.906024 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:20.923247 1483412 provision.go:143] copyHostCerts
	I1217 01:55:20.923326 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 01:55:20.923339 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 01:55:20.923416 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 01:55:20.923533 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 01:55:20.923544 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 01:55:20.923576 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 01:55:20.923649 1483412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 01:55:20.923659 1483412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 01:55:20.923689 1483412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 01:55:20.923744 1483412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
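configureAuth signs a fresh server certificate against the minikube CA and embeds every name the machine may be addressed by as a SAN. A hedged openssl equivalent of that signing step (file names illustrative; minikube does this in Go, not by shelling out to openssl):

    # CSR, then sign with the CA, attaching the same SANs as in the log line above
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.newest-cni-456492" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-456492')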
	I1217 01:55:21.003325 1483412 provision.go:177] copyRemoteCerts
	I1217 01:55:21.003406 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:55:21.003466 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.021337 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
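Every `ssh_runner` call from here on rides that client: user `docker`, the key generated at 01:55:17, and the loopback port mapped to the container's port 22. The manual equivalent, using the values in the log line above:

    ssh -i /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa \
      -p 34249 docker@127.0.0.1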
	I1217 01:55:21.118292 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:55:21.145239 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:55:21.164973 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:55:21.184653 1483412 provision.go:87] duration metric: took 278.664546ms to configureAuth
	I1217 01:55:21.184681 1483412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:55:21.184876 1483412 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:55:21.184890 1483412 machine.go:97] duration metric: took 3.744849982s to provisionDockerMachine
	I1217 01:55:21.184897 1483412 client.go:176] duration metric: took 9.372209957s to LocalClient.Create
	I1217 01:55:21.184913 1483412 start.go:167] duration metric: took 9.372271349s to libmachine.API.Create "newest-cni-456492"
	I1217 01:55:21.184924 1483412 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 01:55:21.184935 1483412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:55:21.184993 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:55:21.185038 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.202893 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.301704 1483412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:55:21.305094 1483412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:55:21.305120 1483412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:55:21.305132 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 01:55:21.305183 1483412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 01:55:21.305257 1483412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 01:55:21.305367 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:55:21.313575 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 01:55:21.332690 1483412 start.go:296] duration metric: took 147.751178ms for postStartSetup
	I1217 01:55:21.333071 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.349950 1483412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 01:55:21.350233 1483412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:55:21.350284 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.367086 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.458630 1483412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:55:21.463314 1483412 start.go:128] duration metric: took 9.65435334s to createHost
	I1217 01:55:21.463343 1483412 start.go:83] releasing machines lock for "newest-cni-456492", held for 9.654483449s
	I1217 01:55:21.463413 1483412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 01:55:21.480150 1483412 ssh_runner.go:195] Run: cat /version.json
	I1217 01:55:21.480207 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.480490 1483412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:55:21.480549 1483412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 01:55:21.503493 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.506377 1483412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 01:55:21.593349 1483412 ssh_runner.go:195] Run: systemctl --version
	I1217 01:55:21.687982 1483412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:55:21.692115 1483412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:55:21.692182 1483412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:55:21.718403 1483412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
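Note that the competing bridge/podman CNI configs are not deleted, only renamed with a `.mk_disabled` suffix by the `find ... -exec mv` above, which makes the change easy to revert:

    # Restore the original CNI configs if needed
    for f in /etc/cni/net.d/*.mk_disabled; do
      sudo mv "$f" "${f%.mk_disabled}"
    done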
	I1217 01:55:21.718427 1483412 start.go:496] detecting cgroup driver to use...
	I1217 01:55:21.718460 1483412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:55:21.718523 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 01:55:21.733259 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:55:21.746485 1483412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:55:21.746571 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:55:21.764553 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:55:21.782958 1483412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:55:21.908620 1483412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:55:22.026459 1483412 docker.go:234] disabling docker service ...
	I1217 01:55:22.026538 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:55:22.052603 1483412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:55:22.068218 1483412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:55:22.193394 1483412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:55:22.321475 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:55:22.334922 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:55:22.349881 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:55:22.359035 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:55:22.368328 1483412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:55:22.368453 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:55:22.377717 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.387475 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:55:22.396690 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:55:22.405767 1483412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:55:22.414387 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:55:22.423447 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:55:22.432777 1483412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:55:22.442244 1483412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:55:22.450102 1483412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:55:22.457779 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:22.584574 1483412 ssh_runner.go:195] Run: sudo systemctl restart containerd
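The run of `sed` edits above rewrites /etc/containerd/config.toml in place rather than templating a fresh file: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, `SystemdCgroup` is forced to false to match the host's cgroupfs driver, legacy `io.containerd.runtime.v1.linux`/`io.containerd.runc.v1` names are mapped to `io.containerd.runc.v2`, and unprivileged ports are enabled. After the restart, a quick way to confirm the edits took:

    sudo grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd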
	I1217 01:55:22.739170 1483412 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 01:55:22.739315 1483412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 01:55:22.743653 1483412 start.go:564] Will wait 60s for crictl version
	I1217 01:55:22.743721 1483412 ssh_runner.go:195] Run: which crictl
	I1217 01:55:22.747627 1483412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:55:22.774963 1483412 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 01:55:22.775088 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.795646 1483412 ssh_runner.go:195] Run: containerd --version
	I1217 01:55:22.822177 1483412 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 01:55:22.825213 1483412 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:55:22.841339 1483412 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 01:55:22.845097 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
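The one-liner above is minikube's replace-or-append pattern for /etc/hosts: filter out any stale `host.minikube.internal` line, append the current one, and install the result with `sudo cp` (a plain `sudo cmd > /etc/hosts` would fail, since the redirect is opened by the unprivileged shell). Generalized:

    # Replace-or-append a pinned hosts entry
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts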
	I1217 01:55:22.857844 1483412 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:55:22.860750 1483412 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:55:22.860891 1483412 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 01:55:22.860986 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.887811 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.887838 1483412 containerd.go:534] Images already preloaded, skipping extraction
	I1217 01:55:22.887921 1483412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:55:22.916774 1483412 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 01:55:22.916798 1483412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:55:22.916806 1483412 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 01:55:22.916901 1483412 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
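That drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty `ExecStart=` line clears the base unit's command before the override sets the real one. To see the merged unit systemd will actually run:

    systemctl cat kubelet    # base unit plus every drop-in, in order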
	I1217 01:55:22.916973 1483412 ssh_runner.go:195] Run: sudo crictl info
	I1217 01:55:22.941450 1483412 cni.go:84] Creating CNI manager for ""
	I1217 01:55:22.941474 1483412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 01:55:22.941497 1483412 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:55:22.941521 1483412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:55:22.941668 1483412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:55:22.941741 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:55:22.949446 1483412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:55:22.949536 1483412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:55:22.957307 1483412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 01:55:22.970080 1483412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:55:22.983144 1483412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
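The three-document kubeadm config assembled above now sits at /var/tmp/minikube/kubeadm.yaml.new. When iterating on such a file by hand, kubeadm can lint it without touching the node (the validate subcommand exists in kubeadm v1.26+):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or exercise the full init path as a no-op:
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run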
	I1217 01:55:22.996455 1483412 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:55:23.000264 1483412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:55:23.011956 1483412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:55:23.132195 1483412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:55:23.153898 1483412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 01:55:23.153924 1483412 certs.go:195] generating shared ca certs ...
	I1217 01:55:23.153953 1483412 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.154120 1483412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 01:55:23.154167 1483412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 01:55:23.154179 1483412 certs.go:257] generating profile certs ...
	I1217 01:55:23.154252 1483412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 01:55:23.154267 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt with IP's: []
	I1217 01:55:23.536556 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt ...
	I1217 01:55:23.536598 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.crt: {Name:mk5f328f97a5398eaf8448e799e55e14628a21cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536799 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key ...
	I1217 01:55:23.536813 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key: {Name:mk204e71ac4a7537095f4378fcacae497aae9e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.536900 1483412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 01:55:23.536919 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 01:55:23.700587 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d ...
	I1217 01:55:23.700617 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d: {Name:mk2ff6ffd7e0f9e8790c41f75004f783e2e2cbfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700810 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d ...
	I1217 01:55:23.700838 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d: {Name:mk4a8fd878c1db6fa4ca6d31ac312311a9e574fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:23.700939 1483412 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt
	I1217 01:55:23.701025 1483412 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key
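The apiserver cert just assembled carries SANs for 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service VIP), loopback, and the node IP. To check which names actually landed in a cert:

    openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'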
	I1217 01:55:23.701086 1483412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 01:55:23.701104 1483412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt with IP's: []
	I1217 01:55:24.186185 1483412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt ...
	I1217 01:55:24.186218 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt: {Name:mk4e097689774236e217287c4769a9bc6b62d157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186434 1483412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key ...
	I1217 01:55:24.186460 1483412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key: {Name:mk9311419a1f9f3ab4e171bbfc5a685160d56892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:55:24.186687 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 01:55:24.186737 1483412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 01:55:24.186753 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:55:24.186781 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:55:24.186819 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:55:24.186847 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 01:55:24.186901 1483412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
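The warning a few lines up shows minikube skipping 1211243_empty.pem because it is 0 bytes. A hedged sketch for spotting such empty cert files yourself (the certs directory path is taken from the log above; adjust for your own minikube home):

	# Hedged sketch: flag zero-byte .pem files that minikube would ignore.
	CERTS_DIR=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs
	find "$CERTS_DIR" -name '*.pem' -size -1c -print   # -size -1c matches empty files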
	I1217 01:55:24.187489 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:55:24.207140 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 01:55:24.225813 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:55:24.244898 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 01:55:24.264402 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:55:24.283038 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:55:24.302197 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:55:24.320347 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:55:24.339022 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 01:55:24.357411 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:55:24.375801 1483412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 01:55:24.394312 1483412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
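With the certs copied onto the node, a quick spot-check is possible from the host. A hedged sketch: the profile name and target path are taken from the log, and the command form assumes `minikube ssh` accepts an inline command, as it does in current releases:

	# Hedged sketch: confirm the apiserver cert landed intact on the node.
	minikube -p newest-cni-456492 ssh \
	  "sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt"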
	I1217 01:55:24.407959 1483412 ssh_runner.go:195] Run: openssl version
	I1217 01:55:24.414593 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.422149 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 01:55:24.429938 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433843 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.433913 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 01:55:24.475535 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.483235 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:55:24.490706 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.498434 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:55:24.506686 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510403 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.510492 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:55:24.551573 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:55:24.559261 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:55:24.566821 1483412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.574182 1483412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 01:55:24.581528 1483412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585424 1483412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.585508 1483412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 01:55:24.628267 1483412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:55:24.636095 1483412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
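The ls / openssl / ln triplets above implement OpenSSL's hashed-directory lookup: each trusted CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. A minimal sketch of one iteration, using the minikubeCA values from the log:

	# Hedged sketch of the hash-and-link convention seen above.
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")      # prints b5213941 for this CA
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"     # ".0" = first cert with this hash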
	I1217 01:55:24.643970 1483412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:55:24.648671 1483412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:55:24.648775 1483412 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:55:24.648946 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 01:55:24.649043 1483412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:55:24.677969 1483412 cri.go:89] found id: ""
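`found id: ""` means the label-filtered listing returned nothing: no kube-system containers exist yet under containerd, which is expected on a first start. The manual equivalent is the same command the log shows, run on the node:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# empty output => no kube-system containers have ever been created here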
	I1217 01:55:24.678093 1483412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:55:24.688459 1483412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:55:24.696458 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:55:24.696550 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:55:24.704828 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:55:24.704848 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:55:24.704931 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:55:24.712883 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:55:24.712983 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:55:24.720826 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:55:24.728999 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:55:24.729100 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:55:24.736825 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.744799 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:55:24.744867 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:55:24.752477 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:55:24.760816 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:55:24.760931 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
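The four probe-then-delete pairs above all follow one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint. A hedged sketch of the loop minikube is effectively running here:

	# Hedged sketch: drop kubeconfigs that do not reference the expected endpoint.
	EP=https://control-plane.minikube.internal:8443
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done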
	I1217 01:55:24.768678 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:55:24.810821 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:55:24.811126 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:55:24.896174 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:55:24.896294 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:55:24.896359 1483412 kubeadm.go:319] OS: Linux
	I1217 01:55:24.896426 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:55:24.896502 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:55:24.896566 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:55:24.896639 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:55:24.896704 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:55:24.896779 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:55:24.896863 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:55:24.896941 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:55:24.897010 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:55:24.971043 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:55:24.971234 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:55:24.971378 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:55:24.982063 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:55:24.988218 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:55:24.988318 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:55:24.988395 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:55:25.419455 1483412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:55:25.522339 1483412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:55:25.598229 1483412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:55:25.671518 1483412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:55:25.854804 1483412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:55:25.855019 1483412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.196066 1483412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:55:26.196425 1483412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 01:55:26.785707 1483412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:55:26.841556 1483412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:55:27.019008 1483412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:55:27.019328 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:55:27.196727 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:55:27.751450 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:55:27.908167 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:55:28.296645 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:55:28.549325 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:55:28.550095 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:55:28.554755 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:55:28.558438 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:55:28.558547 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:55:28.558629 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:55:28.558695 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:55:28.574196 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:55:28.574560 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:55:28.582119 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:55:28.582467 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:55:28.582759 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:55:28.732745 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:55:28.732882 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
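The kubelet-check phase polls the kubelet's local health endpoint; the same probe can be reproduced by hand on the node (hedged sketch):

	# A healthy kubelet answers "ok"; the timeouts below mean this never succeeded.
	curl -sSL -m 2 http://127.0.0.1:10248/healthz; echo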
	I1217 01:57:34.124748 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:57:34.124781 1475658 kubeadm.go:319] 
	I1217 01:57:34.124851 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:57:34.130032 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.130094 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.130184 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.130239 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.130274 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.130319 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.130369 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.130417 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.130466 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.130513 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.130562 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.130607 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.130655 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.130701 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.130774 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.130869 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.130959 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.131021 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.134054 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.134142 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.134206 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.134273 1475658 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:57:34.134329 1475658 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:57:34.134389 1475658 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:57:34.134439 1475658 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:57:34.134492 1475658 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:57:34.134614 1475658 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134712 1475658 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:57:34.134885 1475658 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 01:57:34.134988 1475658 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:57:34.135097 1475658 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:57:34.135183 1475658 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:57:34.135283 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:34.135344 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:34.135402 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:34.135459 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:34.135521 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:34.135575 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:34.135655 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:34.135721 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:34.138598 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:34.138713 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:34.138799 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:34.138871 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:34.138982 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:34.139083 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:34.139203 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:34.139301 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:34.139344 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:34.139483 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:34.139594 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:57:34.139663 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005993508s
	I1217 01:57:34.139667 1475658 kubeadm.go:319] 
	I1217 01:57:34.139728 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:57:34.139770 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:57:34.139882 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:57:34.139887 1475658 kubeadm.go:319] 
	I1217 01:57:34.139998 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:57:34.140032 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:57:34.140065 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
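kubeadm's own hints apply directly here; from the host they translate to the following (hedged sketch, profile name taken from the no-preload log lines):

	minikube -p no-preload-178365 ssh "sudo systemctl status kubelet --no-pager"
	minikube -p no-preload-178365 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"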
	W1217 01:57:34.140174 1475658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178365] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005993508s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
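The cgroups v1 deprecation warning in the stderr above is a plausible root cause on this v1.35.0-beta.0 run: the 5.15 kernel here is on the legacy hierarchy, and per the warning text, keeping kubelet v1.35+ running on a cgroup v1 host requires an explicit opt-in. A hedged sketch of that opt-in, assuming the kubeadm config path from the log and the v1beta1 KubeletConfiguration schema (both assumptions, not verified against this run):

	# Hedged sketch: append an explicit cgroup v1 opt-in to the kubeadm config.
	# Field name taken from the warning above; exact schema is an assumption.
	cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF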
	
	I1217 01:57:34.140253 1475658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:57:34.140626 1475658 kubeadm.go:319] 
	I1217 01:57:34.576208 1475658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:57:34.589972 1475658 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:34.590043 1475658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:34.598643 1475658 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:34.598705 1475658 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:34.598780 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:34.606738 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:34.606852 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:34.614781 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:34.622706 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:34.622772 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:34.630400 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.638446 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:34.638512 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:34.646373 1475658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:34.654277 1475658 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:34.654364 1475658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:57:34.662056 1475658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:34.702011 1475658 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:57:34.702113 1475658 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:57:34.773814 1475658 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:57:34.773913 1475658 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:57:34.773969 1475658 kubeadm.go:319] OS: Linux
	I1217 01:57:34.774045 1475658 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:57:34.774109 1475658 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:57:34.774187 1475658 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:57:34.774266 1475658 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:57:34.774339 1475658 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:57:34.774416 1475658 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:57:34.774474 1475658 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:57:34.774547 1475658 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:57:34.774609 1475658 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:57:34.846561 1475658 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:57:34.846676 1475658 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:57:34.846767 1475658 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:57:34.854122 1475658 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:57:34.857357 1475658 out.go:252]   - Generating certificates and keys ...
	I1217 01:57:34.857482 1475658 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:57:34.857567 1475658 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:57:34.857679 1475658 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:57:34.857759 1475658 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:57:34.857854 1475658 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:57:34.857924 1475658 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:57:34.858004 1475658 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:57:34.858087 1475658 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:57:34.858187 1475658 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:57:34.858274 1475658 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:57:34.858318 1475658 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:57:34.858386 1475658 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:57:35.122967 1475658 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:57:35.269702 1475658 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:57:35.473145 1475658 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:57:36.090186 1475658 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:57:36.438081 1475658 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:57:36.439114 1475658 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:57:36.441843 1475658 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:57:36.444972 1475658 out.go:252]   - Booting up control plane ...
	I1217 01:57:36.445093 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:57:36.445187 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:57:36.447586 1475658 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:57:36.469683 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:57:36.469812 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:57:36.477712 1475658 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:57:36.478146 1475658 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:57:36.478375 1475658 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:57:36.619400 1475658 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:57:36.619522 1475658 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:59:28.732281 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001211696s
	I1217 01:59:28.732307 1483412 kubeadm.go:319] 
	I1217 01:59:28.732365 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:59:28.732399 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:59:28.732504 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:59:28.732508 1483412 kubeadm.go:319] 
	I1217 01:59:28.732613 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:59:28.732645 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:59:28.732676 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:59:28.732680 1483412 kubeadm.go:319] 
	I1217 01:59:28.737697 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:59:28.738161 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:59:28.738281 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:59:28.738538 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:59:28.738549 1483412 kubeadm.go:319] 
	I1217 01:59:28.738623 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 01:59:28.738846 1483412 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456492] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001211696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
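Whether the node is in fact on the legacy cgroup v1 hierarchy (the condition both warnings above key on) can be checked directly on the node; a hedged sketch:

	# "cgroup2fs" => unified cgroup v2; "tmpfs" => legacy cgroup v1 hierarchy.
	stat -fc %T /sys/fs/cgroup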
	
	I1217 01:59:28.738945 1483412 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 01:59:29.148897 1483412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:59:29.163236 1483412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:59:29.163322 1483412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:59:29.173290 1483412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:59:29.173315 1483412 kubeadm.go:158] found existing configuration files:
	
	I1217 01:59:29.173378 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:59:29.189171 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:59:29.189238 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:59:29.198769 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:59:29.206895 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:59:29.206960 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:59:29.214464 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.222503 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:59:29.222596 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:59:29.230032 1483412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:59:29.237621 1483412 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:59:29.237713 1483412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:59:29.244936 1483412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:59:29.283887 1483412 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:59:29.284148 1483412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:59:29.355640 1483412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:59:29.355800 1483412 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:59:29.355878 1483412 kubeadm.go:319] OS: Linux
	I1217 01:59:29.355962 1483412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:59:29.356047 1483412 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:59:29.356127 1483412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:59:29.356205 1483412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:59:29.356285 1483412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:59:29.356371 1483412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:59:29.356449 1483412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:59:29.356530 1483412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:59:29.356609 1483412 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:59:29.424082 1483412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:59:29.424247 1483412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:59:29.424404 1483412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:59:29.430675 1483412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:59:29.436331 1483412 out.go:252]   - Generating certificates and keys ...
	I1217 01:59:29.436427 1483412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:59:29.436498 1483412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:59:29.436614 1483412 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:59:29.436760 1483412 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:59:29.436868 1483412 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:59:29.436955 1483412 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:59:29.437066 1483412 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:59:29.437169 1483412 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:59:29.437294 1483412 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:59:29.437455 1483412 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:59:29.437914 1483412 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:59:29.438023 1483412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:59:29.643674 1483412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:59:29.811188 1483412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:59:30.039930 1483412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:59:30.429283 1483412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:59:30.523266 1483412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:59:30.523965 1483412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:59:30.526610 1483412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:59:30.529865 1483412 out.go:252]   - Booting up control plane ...
	I1217 01:59:30.529993 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:59:30.530148 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:59:30.530270 1483412 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:59:30.551379 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:59:30.551496 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:59:30.562968 1483412 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:59:30.563492 1483412 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:59:30.563746 1483412 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:59:30.712531 1483412 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:59:30.712658 1483412 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:36.620744 1475658 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001388969s
	I1217 02:01:36.620785 1475658 kubeadm.go:319] 
	I1217 02:01:36.620840 1475658 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:36.620873 1475658 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:36.620977 1475658 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:36.620988 1475658 kubeadm.go:319] 
	I1217 02:01:36.621087 1475658 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:36.621122 1475658 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:36.621154 1475658 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:36.621162 1475658 kubeadm.go:319] 
	I1217 02:01:36.624858 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:01:36.625354 1475658 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:36.625468 1475658 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:36.625731 1475658 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:36.625742 1475658 kubeadm.go:319] 
	I1217 02:01:36.625808 1475658 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:36.625889 1475658 kubeadm.go:403] duration metric: took 8m7.357719708s to StartCluster
	I1217 02:01:36.625944 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:36.626024 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:36.652571 1475658 cri.go:89] found id: ""
	I1217 02:01:36.652609 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.652624 1475658 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:36.652631 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:01:36.652704 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:36.678690 1475658 cri.go:89] found id: ""
	I1217 02:01:36.678713 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.678721 1475658 logs.go:284] No container was found matching "etcd"
	I1217 02:01:36.678728 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:01:36.678789 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:36.705351 1475658 cri.go:89] found id: ""
	I1217 02:01:36.705375 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.705383 1475658 logs.go:284] No container was found matching "coredns"
	I1217 02:01:36.705389 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:36.705452 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:36.730965 1475658 cri.go:89] found id: ""
	I1217 02:01:36.730992 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.731001 1475658 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:36.731008 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:36.731070 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:36.760345 1475658 cri.go:89] found id: ""
	I1217 02:01:36.760370 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.760379 1475658 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:36.760385 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:36.760446 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:36.785560 1475658 cri.go:89] found id: ""
	I1217 02:01:36.785583 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.785592 1475658 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:36.785599 1475658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:36.785697 1475658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:36.814303 1475658 cri.go:89] found id: ""
	I1217 02:01:36.814328 1475658 logs.go:282] 0 containers: []
	W1217 02:01:36.814337 1475658 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:36.814347 1475658 logs.go:123] Gathering logs for container status ...
	I1217 02:01:36.814359 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:36.842640 1475658 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:36.842668 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:36.901858 1475658 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:36.901897 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:36.918036 1475658 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:36.918069 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:36.984314 1475658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:01:36.976635    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.977198    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.978728    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.979278    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:01:36.980881    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:36.984350 1475658 logs.go:123] Gathering logs for containerd ...
	I1217 02:01:36.984362 1475658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:01:37.028786 1475658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:01:37.028860 1475658 out.go:285] * 
	W1217 02:01:37.028917 1475658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:01:37.028931 1475658 out.go:285] * 
	W1217 02:01:37.031068 1475658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:01:37.037220 1475658 out.go:203] 
	W1217 02:01:37.040930 1475658 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001388969s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:01:37.041001 1475658 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:01:37.041022 1475658 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:01:37.044273 1475658 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:53:20 no-preload-178365 containerd[756]: time="2025-12-17T01:53:20.013986261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.083205389Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.085894407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.093386032Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:21 no-preload-178365 containerd[756]: time="2025-12-17T01:53:21.094057489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.042937201Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.045143048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058075151Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:22 no-preload-178365 containerd[756]: time="2025-12-17T01:53:22.058727605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.132008848Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.135132972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.143661850Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:23 no-preload-178365 containerd[756]: time="2025-12-17T01:53:23.144058260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.267145399Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.269771295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.277531008Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:24 no-preload-178365 containerd[756]: time="2025-12-17T01:53:24.278492420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.715372635Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.717609801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.726893123Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:25 no-preload-178365 containerd[756]: time="2025-12-17T01:53:25.727845953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.108154182Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.111113669Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120555130Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 01:53:26 no-preload-178365 containerd[756]: time="2025-12-17T01:53:26.120954125Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:04.392768    6675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:04.393459    6675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:04.395388    6675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:04.395944    6675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:04.397766    6675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:03:04 up  7:45,  0 user,  load average: 0.94, 1.00, 1.62
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:03:01 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:01 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 433.
	Dec 17 02:03:01 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:01 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:01 no-preload-178365 kubelet[6553]: E1217 02:03:01.946140    6553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:01 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:01 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:02 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 434.
	Dec 17 02:03:02 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:02 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:02 no-preload-178365 kubelet[6558]: E1217 02:03:02.682892    6558 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:02 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:02 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:03 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 435.
	Dec 17 02:03:03 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:03 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:03 no-preload-178365 kubelet[6569]: E1217 02:03:03.470962    6569 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:03 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:03 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:04 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 436.
	Dec 17 02:03:04 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:04 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:04 no-preload-178365 kubelet[6623]: E1217 02:03:04.230912    6623 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:04 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:04 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
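The dump above points at a single root cause: the host kernel (5.15.0-1084-aws) is still on cgroup v1, and the kubelet bundled with Kubernetes v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host (see the repeated run.go:72 errors in the kubelet section and the SystemVerification warning naming the 'FailCgroupV1' kubelet option). A minimal diagnostic sketch, assuming the profile name from this run; the cgroup check is a standard one, and the rest simply replays the commands the kubeadm output already suggests:

    # Shell into the minikube node for this profile (profile name taken from this log).
    minikube ssh -p no-preload-178365

    # Inside the node: 'cgroup2fs' means cgroup v2, 'tmpfs' means cgroup v1.
    stat -fc %T /sys/fs/cgroup

    # The diagnostics suggested by the kubeadm output above.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50

Per the warning text, a cluster that must stay on cgroup v1 would need the kubelet configuration option FailCgroupV1 set to false (and the validation explicitly skipped); how to thread that through minikube is version-dependent, so treat this as a pointer rather than a recipe.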
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 6 (318.628738ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:03:04.841426 1494061 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (83.14s)
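Besides the kubelet failure, the status output above shows a secondary, recoverable problem: the kubeconfig no longer contains an endpoint for "no-preload-178365", so kubectl is pointing at a stale context. A short sketch of the fix the warning itself proposes, assuming the same profile name:

    # Rewrite the kubeconfig entry for this profile to the cluster's current endpoint.
    minikube update-context -p no-preload-178365

    # Confirm which context kubectl now targets.
    kubectl config current-context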

x
+
TestStartStop/group/no-preload/serial/SecondStart (370.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.265812228s)

-- stdout --
	* [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
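This second start runs with --preload=false, so instead of extracting a preloaded image tarball, minikube serves every control-plane image from its per-architecture on-disk cache; the cache.go lines in the stderr below confirm each required image already exists under .minikube/cache/images/arm64. A sketch of how one might inspect that cache, reusing the MINIKUBE_HOME from this run (the path is specific to this CI host and purely illustrative):

    # List the cached registry.k8s.io images for this arm64 profile.
    MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
    ls "$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io"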
** stderr ** 
	I1217 02:03:06.446138 1494358 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:03:06.446331 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446344 1494358 out.go:374] Setting ErrFile to fd 2...
	I1217 02:03:06.446349 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446613 1494358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:03:06.446996 1494358 out.go:368] Setting JSON to false
	I1217 02:03:06.447949 1494358 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27937,"bootTime":1765909050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:03:06.448026 1494358 start.go:143] virtualization:  
	I1217 02:03:06.451183 1494358 out.go:179] * [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:03:06.455055 1494358 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:03:06.455196 1494358 notify.go:221] Checking for updates...
	I1217 02:03:06.461067 1494358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:03:06.464077 1494358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:06.467522 1494358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:03:06.470660 1494358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:03:06.473573 1494358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:03:06.476917 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:06.477577 1494358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:03:06.504584 1494358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:03:06.504713 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.568470 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.558714769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.568580 1494358 docker.go:319] overlay module found
	I1217 02:03:06.571663 1494358 out.go:179] * Using the docker driver based on existing profile
	I1217 02:03:06.574409 1494358 start.go:309] selected driver: docker
	I1217 02:03:06.574441 1494358 start.go:927] validating driver "docker" against &{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.574538 1494358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:03:06.575218 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.633705 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.62420129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.634037 1494358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:03:06.634074 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:06.634136 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:06.634181 1494358 start.go:353] cluster config:
	{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.637255 1494358 out.go:179] * Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	I1217 02:03:06.640178 1494358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:03:06.642991 1494358 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:03:06.645784 1494358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:03:06.645819 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:06.645947 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:06.646262 1494358 cache.go:107] acquiring lock: {Name:mk4890d4b47ae1973de2f5e1f0682feb41ee40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646336 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 02:03:06.646344 1494358 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.402µs
	I1217 02:03:06.646356 1494358 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 02:03:06.646368 1494358 cache.go:107] acquiring lock: {Name:mk966096fd85af29d80d70ba567f975fd1c8ab20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646398 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:03:06.646403 1494358 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.063µs
	I1217 02:03:06.646410 1494358 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646419 1494358 cache.go:107] acquiring lock: {Name:mkf4d095c495df29849f640a0755588b041f7643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646446 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:03:06.646451 1494358 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.19µs
	I1217 02:03:06.646458 1494358 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646468 1494358 cache.go:107] acquiring lock: {Name:mk1c22383e6094d20d836c3a904bbbe609668a02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646495 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:03:06.646500 1494358 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.599µs
	I1217 02:03:06.646506 1494358 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646514 1494358 cache.go:107] acquiring lock: {Name:mkc3683c3186a723f5651545e5f013a6bc8b78e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646539 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1217 02:03:06.646545 1494358 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.074µs
	I1217 02:03:06.646552 1494358 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646560 1494358 cache.go:107] acquiring lock: {Name:mk3a7027108fb6cda418f0aea932fdb404491198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646585 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 02:03:06.646589 1494358 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.105µs
	I1217 02:03:06.646596 1494358 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 02:03:06.646606 1494358 cache.go:107] acquiring lock: {Name:mkbcf0cf66af7f52acaeaf88186edd5961eb7fb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646635 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1217 02:03:06.646639 1494358 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 35.028µs
	I1217 02:03:06.646645 1494358 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1217 02:03:06.646653 1494358 cache.go:107] acquiring lock: {Name:mk85e5e85708e9527e64bdd95012aff390add343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646678 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 02:03:06.646682 1494358 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.031µs
	I1217 02:03:06.646688 1494358 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 02:03:06.646693 1494358 cache.go:87] Successfully saved all images to host disk.
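The cache phase above repeats one pattern per image: acquire a per-path lock, stat the tar file, and skip the save when it already exists. A minimal Go sketch of that check-before-save pattern, using a hypothetical saveToTarIfMissing helper (this is illustrative, not minikube's actual cache.go):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)

	// one mutex per cache path, mirroring the "acquiring lock" lines above
	var locks sync.Map

	func saveToTarIfMissing(cacheDir, image string) error {
		// e.g. registry.k8s.io/pause:3.10.1 -> .../registry.k8s.io/pause_3.10.1
		path := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		mu, _ := locks.LoadOrStore(path, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		if _, err := os.Stat(path); err == nil {
			// the "exists ... save to tar file ... succeeded" fast path
			fmt.Printf("cache image %q -> %q exists, skipping\n", image, path)
			return nil
		}
		// otherwise the image would be pulled and written to disk here
		return fmt.Errorf("image %s not cached; pull not implemented in this sketch", image)
	}

	func main() {
		if err := saveToTarIfMissing("/tmp/cache", "registry.k8s.io/pause:3.10.1"); err != nil {
			fmt.Println(err)
		}
	}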
	I1217 02:03:06.665484 1494358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:03:06.665506 1494358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:03:06.665526 1494358 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:03:06.665557 1494358 start.go:360] acquireMachinesLock for no-preload-178365: {Name:mkd4a1763d090ac24f95097d34ac035f597ec2f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.665618 1494358 start.go:364] duration metric: took 39.672µs to acquireMachinesLock for "no-preload-178365"
	I1217 02:03:06.665659 1494358 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:03:06.665665 1494358 fix.go:54] fixHost starting: 
	I1217 02:03:06.665948 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.681763 1494358 fix.go:112] recreateIfNeeded on no-preload-178365: state=Stopped err=<nil>
	W1217 02:03:06.681790 1494358 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:03:06.685089 1494358 out.go:252] * Restarting existing docker container for "no-preload-178365" ...
	I1217 02:03:06.685169 1494358 cli_runner.go:164] Run: docker start no-preload-178365
	I1217 02:03:06.958594 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.983526 1494358 kic.go:430] container "no-preload-178365" state is running.
	I1217 02:03:06.983925 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:07.006615 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:07.006877 1494358 machine.go:94] provisionDockerMachine start ...
	I1217 02:03:07.006940 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:07.027938 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:07.028270 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:07.028285 1494358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:03:07.028921 1494358 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42386->127.0.0.1:34254: read: connection reset by peer
	I1217 02:03:10.169609 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.169636 1494358 ubuntu.go:182] provisioning hostname "no-preload-178365"
	I1217 02:03:10.169740 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.194161 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.194504 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.194521 1494358 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-178365 && echo "no-preload-178365" | sudo tee /etc/hostname
	I1217 02:03:10.335145 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.335254 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.353261 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.353619 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.353703 1494358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178365/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:03:10.485869 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
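The shell block sent over SSH above rewrites the 127.0.1.1 entry in /etc/hosts so the container resolves its own hostname. A small Go sketch of how a provisioner can render that exact snippet before running it, with hostsCmd as a hypothetical helper:

	package main

	import "fmt"

	// hostsCmd renders the /etc/hosts fix-up shown in the log for a hostname.
	func hostsCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() { fmt.Println(hostsCmd("no-preload-178365")) }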
	I1217 02:03:10.485894 1494358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:03:10.485923 1494358 ubuntu.go:190] setting up certificates
	I1217 02:03:10.485939 1494358 provision.go:84] configureAuth start
	I1217 02:03:10.485997 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:10.502661 1494358 provision.go:143] copyHostCerts
	I1217 02:03:10.502746 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:03:10.502761 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:03:10.502842 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:03:10.502943 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:03:10.502955 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:03:10.502981 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:03:10.503037 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:03:10.503046 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:03:10.503070 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:03:10.503118 1494358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.no-preload-178365 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-178365]
	I1217 02:03:10.769670 1494358 provision.go:177] copyRemoteCerts
	I1217 02:03:10.769739 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:03:10.769777 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.789688 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:10.886311 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:03:10.907152 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:03:10.927302 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:03:10.945784 1494358 provision.go:87] duration metric: took 459.830227ms to configureAuth
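The copyHostCerts step above is a remove-then-copy refresh: any stale destination file is deleted before the cert is copied into place, which keeps the operation idempotent across restarts. A minimal sketch of that pattern, assuming a hypothetical replaceFile helper rather than minikube's exec_runner:

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	func replaceFile(src, dst string, perm os.FileMode) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil { // "found ..., removing ..."
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in) // "cp: ... --> ... (1082 bytes)"
		return err
	}

	func main() {
		// hypothetical paths, standing in for .minikube/certs/ca.pem etc.
		if err := replaceFile("certs/ca.pem", "ca.pem", 0o644); err != nil {
			fmt.Println(err)
		}
	}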
	I1217 02:03:10.945813 1494358 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:03:10.946051 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:10.946065 1494358 machine.go:97] duration metric: took 3.939178962s to provisionDockerMachine
	I1217 02:03:10.946075 1494358 start.go:293] postStartSetup for "no-preload-178365" (driver="docker")
	I1217 02:03:10.946086 1494358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:03:10.946141 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:03:10.946189 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.963795 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.062181 1494358 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:03:11.066171 1494358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:03:11.066203 1494358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:03:11.066214 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:03:11.066271 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:03:11.066354 1494358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:03:11.066460 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:03:11.074455 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:11.096806 1494358 start.go:296] duration metric: took 150.715868ms for postStartSetup
	I1217 02:03:11.096935 1494358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:03:11.096985 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.115904 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.210914 1494358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:03:11.216447 1494358 fix.go:56] duration metric: took 4.550774061s for fixHost
	I1217 02:03:11.216474 1494358 start.go:83] releasing machines lock for "no-preload-178365", held for 4.550845758s
	I1217 02:03:11.216552 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:11.234013 1494358 ssh_runner.go:195] Run: cat /version.json
	I1217 02:03:11.234074 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.234105 1494358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:03:11.234160 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.254634 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.261745 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.349529 1494358 ssh_runner.go:195] Run: systemctl --version
	I1217 02:03:11.444567 1494358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:03:11.448907 1494358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:03:11.448999 1494358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:03:11.456651 1494358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:03:11.456676 1494358 start.go:496] detecting cgroup driver to use...
	I1217 02:03:11.456715 1494358 detect.go:187] detected "cgroupfs" cgroup driver on host os
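One common heuristic for the cgroup-driver detection above: a unified (v2) hierarchy exposes /sys/fs/cgroup/cgroup.controllers and is usually systemd-managed, while a legacy v1 host (as detected in this run) is driven with cgroupfs. A sketch of that heuristic, which may not be exactly what minikube's detect.go does:

	package main

	import (
		"fmt"
		"os"
	)

	func cgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd" // unified (v2) hierarchy, typically systemd-managed
		}
		return "cgroupfs" // legacy v1 hierarchy, as detected on this host
	}

	func main() { fmt.Println(cgroupDriver()) }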
	I1217 02:03:11.456766 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:03:11.474180 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:03:11.487871 1494358 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:03:11.487945 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:03:11.503199 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:03:11.516179 1494358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:03:11.649581 1494358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:03:11.774192 1494358 docker.go:234] disabling docker service ...
	I1217 02:03:11.774263 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:03:11.789517 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:03:11.802804 1494358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:03:11.921518 1494358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:03:12.041333 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:03:12.054806 1494358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:03:12.068814 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:03:12.078910 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:03:12.088243 1494358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:03:12.088356 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:03:12.097152 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.106832 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:03:12.116858 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.126506 1494358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:03:12.134817 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:03:12.143713 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:03:12.152423 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:03:12.161395 1494358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:03:12.169023 1494358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:03:12.176758 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.290497 1494358 ssh_runner.go:195] Run: sudo systemctl restart containerd
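The block above reconfigures containerd by sed-rewriting /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc v2, CNI conf dir) and then restarting the service. The same SystemdCgroup rewrite expressed in Go with a regexp instead of sed, as an illustrative sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

	// setCgroupfs is equivalent to:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	func setCgroupfs(configTOML string) string {
		return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	}

	func main() {
		fmt.Print(setCgroupfs("  SystemdCgroup = true\n"))
	}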
	I1217 02:03:12.413211 1494358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:03:12.413339 1494358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:03:12.419446 1494358 start.go:564] Will wait 60s for crictl version
	I1217 02:03:12.419560 1494358 ssh_runner.go:195] Run: which crictl
	I1217 02:03:12.423782 1494358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:03:12.453204 1494358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:03:12.453355 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.477890 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.502488 1494358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:03:12.505409 1494358 cli_runner.go:164] Run: docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:03:12.525803 1494358 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 02:03:12.529636 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:03:12.539141 1494358 kubeadm.go:884] updating cluster {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1217 02:03:12.539268 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:12.539323 1494358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:03:12.567893 1494358 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:03:12.567915 1494358 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:03:12.567927 1494358 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:03:12.568032 1494358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-178365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:03:12.568100 1494358 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:03:12.593237 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:12.593259 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:12.593281 1494358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:03:12.593303 1494358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178365 NodeName:no-preload-178365 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:03:12.593419 1494358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-178365"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:03:12.593487 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:03:12.601250 1494358 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:03:12.601320 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:03:12.608723 1494358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:03:12.621096 1494358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:03:12.634046 1494358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 02:03:12.646740 1494358 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:03:12.650274 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:03:12.660396 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.777431 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:12.794901 1494358 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365 for IP: 192.168.76.2
	I1217 02:03:12.794977 1494358 certs.go:195] generating shared ca certs ...
	I1217 02:03:12.795010 1494358 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:12.795186 1494358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:03:12.795275 1494358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:03:12.795305 1494358 certs.go:257] generating profile certs ...
	I1217 02:03:12.795455 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key
	I1217 02:03:12.795549 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2
	I1217 02:03:12.795620 1494358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key
	I1217 02:03:12.795764 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:03:12.795825 1494358 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:03:12.795852 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:03:12.795904 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:03:12.795962 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:03:12.796010 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:03:12.796087 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:12.796737 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:03:12.814980 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:03:12.832753 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:03:12.850216 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:03:12.868173 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:03:12.886289 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 02:03:12.903326 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:03:12.920371 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:03:12.940578 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:03:12.957601 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:03:12.974697 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:03:12.991288 1494358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:03:13.004811 1494358 ssh_runner.go:195] Run: openssl version
	I1217 02:03:13.011807 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.019338 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:03:13.027129 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030736 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030806 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.071860 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:03:13.079209 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.086171 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:03:13.093446 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.097994 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.098062 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.140311 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:03:13.148478 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.156400 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:03:13.164489 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168307 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168376 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.213768 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:03:13.221877 1494358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:03:13.225450 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:03:13.267131 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:03:13.308825 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:03:13.351204 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:03:13.393248 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:03:13.434439 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
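Each openssl probe above (`x509 -noout -in ... -checkend 86400`) asks whether a certificate will still be valid 24 hours from now, so an otherwise-valid but nearly-expired cert triggers regeneration. A Go sketch of the same check, illustrative only:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}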
	I1217 02:03:13.475429 1494358 kubeadm.go:401] StartCluster: {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:13.475532 1494358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:03:13.475608 1494358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:03:13.504535 1494358 cri.go:89] found id: ""
	I1217 02:03:13.504615 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:03:13.512496 1494358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:03:13.512516 1494358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:03:13.512598 1494358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:03:13.520493 1494358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:03:13.520944 1494358 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.521050 1494358 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-178365" cluster setting kubeconfig missing "no-preload-178365" context setting]
	I1217 02:03:13.521320 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.522699 1494358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:03:13.530620 1494358 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 02:03:13.530655 1494358 kubeadm.go:602] duration metric: took 18.132356ms to restartPrimaryControlPlane
	I1217 02:03:13.530665 1494358 kubeadm.go:403] duration metric: took 55.248466ms to StartCluster
	I1217 02:03:13.530680 1494358 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.530739 1494358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.531369 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
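The kubeconfig repair above adds the missing "no-preload-178365" cluster and context entries under a file lock before writing. A minimal sketch of that load/patch/write cycle using client-go's clientcmd package; the names and server URL below are taken from this run's log, and the lock handling is omitted:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// ensureCluster adds cluster and context entries for name if they are missing.
	func ensureCluster(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &api.Cluster{Server: server}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = ensureCluster("/home/jenkins/minikube-integration/22168-1208015/kubeconfig",
			"no-preload-178365", "https://192.168.76.2:8443")
	}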
	I1217 02:03:13.531580 1494358 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:03:13.531879 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:13.531927 1494358 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:03:13.531992 1494358 addons.go:70] Setting storage-provisioner=true in profile "no-preload-178365"
	I1217 02:03:13.532007 1494358 addons.go:239] Setting addon storage-provisioner=true in "no-preload-178365"
	I1217 02:03:13.532031 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.532492 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.532869 1494358 addons.go:70] Setting dashboard=true in profile "no-preload-178365"
	I1217 02:03:13.532892 1494358 addons.go:239] Setting addon dashboard=true in "no-preload-178365"
	W1217 02:03:13.532899 1494358 addons.go:248] addon dashboard should already be in state true
	I1217 02:03:13.532921 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.533338 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.534314 1494358 addons.go:70] Setting default-storageclass=true in profile "no-preload-178365"
	I1217 02:03:13.534373 1494358 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-178365"
	I1217 02:03:13.534686 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.538786 1494358 out.go:179] * Verifying Kubernetes components...
	I1217 02:03:13.541864 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:13.565785 1494358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:03:13.568681 1494358 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.568703 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:03:13.568768 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.578846 1494358 addons.go:239] Setting addon default-storageclass=true in "no-preload-178365"
	I1217 02:03:13.578886 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.579340 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.579557 1494358 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:03:13.582535 1494358 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:03:13.585382 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:03:13.585433 1494358 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:03:13.585542 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.608796 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.639282 1494358 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.639307 1494358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:03:13.639371 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.653415 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.673307 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.775641 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.801572 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:13.824171 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:03:13.824193 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:03:13.841637 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:03:13.841671 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:03:13.855261 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:03:13.855283 1494358 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:03:13.874375 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:03:13.874398 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1217 02:03:13.875947 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:13.875994 1494358 retry.go:31] will retry after 288.181294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
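The apply fails here because the restarted apiserver is not yet listening on 8443, so the addon installer keeps retrying with short randomized waits ("will retry after 288.181294ms") until the endpoint comes up. A sketch of that retry-until-deadline loop, illustrative rather than minikube's retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs f with short randomized sleeps until it succeeds
	// or the deadline elapses.
	func retryUntil(deadline time.Duration, f func() error) error {
		start := time.Now()
		for {
			err := f()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up: %w", err)
			}
			wait := time.Duration(100+rand.Intn(400)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(2*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return fmt.Errorf("connection refused") // apiserver not up yet
			}
			return nil
		})
	}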
	I1217 02:03:13.892373 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.907211 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:03:13.907237 1494358 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:03:13.935844 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:03:13.935871 1494358 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:03:13.961470 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:03:13.961495 1494358 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:03:13.976000 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:03:13.976025 1494358 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:03:13.992266 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:13.992291 1494358 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:03:14.009756 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:14.164994 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:14.633552 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633588 1494358 retry.go:31] will retry after 357.626005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633797 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633814 1494358 retry.go:31] will retry after 154.442663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633867 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633885 1494358 retry.go:31] will retry after 536.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633975 1494358 node_ready.go:35] waiting up to 6m0s for node "no-preload-178365" to be "Ready" ...
	I1217 02:03:14.788822 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:14.850646 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.850682 1494358 retry.go:31] will retry after 194.97222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.992099 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:15.046507 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.089856 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.089896 1494358 retry.go:31] will retry after 200.825401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:15.123044 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.123092 1494358 retry.go:31] will retry after 471.273084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.171850 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:15.233255 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.233288 1494358 retry.go:31] will retry after 740.372196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.291633 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:15.354957 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.354993 1494358 retry.go:31] will retry after 685.879549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.595477 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.661175 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.661206 1494358 retry.go:31] will retry after 918.180528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.974527 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.041010 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:16.041109 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.041153 1494358 retry.go:31] will retry after 922.351729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:16.101618 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.101745 1494358 retry.go:31] will retry after 895.690357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.580236 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:16.635003 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:16.644295 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.644331 1494358 retry.go:31] will retry after 1.757458355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.963859 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.998199 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.029017 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.029053 1494358 retry.go:31] will retry after 1.200975191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:17.065693 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.065740 1494358 retry.go:31] will retry after 733.467842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.799468 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.857813 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.857844 1494358 retry.go:31] will retry after 1.598089082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.230995 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:18.288826 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.288856 1494358 retry.go:31] will retry after 1.072359143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.402269 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:18.499311 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.499346 1494358 retry.go:31] will retry after 1.974986181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:19.135143 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:19.361610 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:19.424580 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.424614 1494358 retry.go:31] will retry after 2.619930526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.456891 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:19.529540 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.529572 1494358 retry.go:31] will retry after 4.103816404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
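
The interleaving above is three independent backoff loops, one per manifest group (dashboard bundle, storage-provisioner, storageclass), each retrying with a randomized, roughly doubling wait (1.97s, 2.62s, 4.10s and so on in this log). A minimal sketch of that pattern, assuming nothing about minikube's actual retry package:

    // retry_sketch.go: a sketch of retry-with-jittered-exponential-backoff,
    // the pattern behind the "will retry after Xs" lines; not minikube code.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff reruns apply until it succeeds or the sleep budget is spent.
    func retryWithBackoff(name string, apply func() error, base, budget time.Duration) error {
        wait := base
        var spent time.Duration
        for {
            err := apply()
            if err == nil {
                return nil
            }
            if spent >= budget {
                return fmt.Errorf("%s: giving up after %s: %w", name, spent, err)
            }
            // Jitter: sleep between 1x and 2x the current base, then double it.
            d := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("%s failed, will retry after %s: %v\n", name, d, err)
            time.Sleep(d)
            spent += d
            wait *= 2
        }
    }

    func main() {
        tries := 0
        _ = retryWithBackoff("storageclass", func() error {
            tries++
            if tries < 4 {
                return errors.New("connect: connection refused")
            }
            return nil
        }, time.Second, time.Minute)
    }
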
	I1217 02:03:20.475130 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:20.538062 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:20.538103 1494358 retry.go:31] will retry after 4.176264138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:21.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
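
In parallel with the addon loops, the bootstrapper polls the node's Ready condition, and that request goes to the external endpoint 192.168.76.2:8443, which is refused as well. A client-go sketch of one iteration of such a poll, with the node name and kubeconfig path taken from the log and error handling trimmed:

    // node_ready_sketch.go: one iteration of a readiness poll like the one
    // producing the node_ready.go warnings above; a sketch, not minikube code.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-178365", metav1.GetOptions{})
        if err != nil {
            // The branch this log keeps hitting: the GET never reaches an apiserver.
            fmt.Println("error getting node (will retry):", err)
            return
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Println("Ready condition:", c.Status)
            }
        }
    }
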
	I1217 02:03:22.045549 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:22.113264 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:22.113297 1494358 retry.go:31] will retry after 6.243728004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.634510 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:23.724320 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.724355 1494358 retry.go:31] will retry after 2.344494398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:24.135189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:24.715564 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:24.778897 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:24.778930 1494358 retry.go:31] will retry after 6.21195427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.069135 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:26.129417 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.129453 1494358 retry.go:31] will retry after 7.88915894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:26.635049 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:28.357601 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:28.414285 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:28.414314 1494358 retry.go:31] will retry after 8.141385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:29.135171 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:30.991983 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:31.086857 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:31.086888 1494358 retry.go:31] will retry after 8.346677944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:31.634434 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:33.635098 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:34.019715 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:34.102746 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:34.102779 1494358 retry.go:31] will retry after 12.223918915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:35.635237 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:36.555986 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:36.613803 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:36.613840 1494358 retry.go:31] will retry after 13.520296046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:37.635411 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:39.434738 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:39.504274 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:39.504305 1494358 retry.go:31] will retry after 11.467503434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:40.134513 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:42.134733 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:44.634504 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:46.326949 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:46.388052 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:46.388081 1494358 retry.go:31] will retry after 12.584899893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:46.634980 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:49.135190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:50.134912 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:50.199930 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:50.199964 1494358 retry.go:31] will retry after 18.31448087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:50.972298 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:51.035555 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:51.035596 1494358 retry.go:31] will retry after 17.961716988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:51.635239 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:54.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:56.135361 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:58.635047 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
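Interleaved with the apply retries, node_ready.go polls the node's Ready condition against the apiserver at 192.168.76.2:8443 roughly every 2.5 seconds. A minimal client-go sketch of that readiness check, assuming a kubeconfig at the default location (this is not minikube's actual code):

    // node_ready_sketch.go: polls a node's Ready condition, mirroring the
    // node_ready.go lines above. Node name and interval are taken from the
    // log; everything else is illustrative.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-178365", metav1.GetOptions{})
    		if err != nil {
    			// Matches the log: the GET fails while the apiserver is down.
    			fmt.Println("error getting node (will retry):", err)
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2500 * time.Millisecond)
    	}
    }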
	I1217 02:03:58.973235 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:59.036686 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:59.036719 1494358 retry.go:31] will retry after 12.655603579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:00.635471 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:03.135287 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:05.135412 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:07.635430 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:08.515014 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:04:08.573240 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:08.573272 1494358 retry.go:31] will retry after 21.601228237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:08.998393 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:09.061840 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:09.061874 1494358 retry.go:31] will retry after 17.025396452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:09.635497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:11.692476 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:04:11.748218 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:11.748251 1494358 retry.go:31] will retry after 27.44869176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:12.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:14.635195 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:17.135236 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:19.635297 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:22.135202 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:24.135479 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:26.088208 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:26.153388 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:26.153423 1494358 retry.go:31] will retry after 19.325825262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:26.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:29.135233 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:30.175522 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:04:30.235853 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:30.235960 1494358 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
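At this point the retry budget for the storage-provisioner callback is exhausted, so the failure is surfaced twice: once as a W-level log line via out.go:285 and once as the user-facing "!" console message, which is why the block above appears in duplicate. Once the apiserver is reachable again, the same apply can be replayed by hand; a minimal sketch of that manual recovery (the manifest path and flags are taken from the log, the rest is illustrative):

    // reapply_sketch.go: replays the failed addon apply once the apiserver
    // is back. --validate=false skips the OpenAPI download that failed
    // above, though a healthy apiserver makes it unnecessary. Not minikube
    // code; assumes kubectl is on PATH with a working kubeconfig.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
    		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }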
	W1217 02:04:31.635004 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:34.134551 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:36.135469 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:38.635140 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:39.197500 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:04:39.262887 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:39.262988 1494358 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 02:04:40.635609 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:43.134500 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:45.135676 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:45.480145 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:45.552840 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:45.552964 1494358 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:04:45.556171 1494358 out.go:179] * Enabled addons: 
	I1217 02:04:45.558951 1494358 addons.go:530] duration metric: took 1m32.027017156s for enable addons: enabled=[]
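With every callback failed, minikube records an empty addon set (enabled=[]) along with the elapsed time; the "duration metric" line is the standard time.Since pattern:

    // duration_metric_sketch.go: mirrors the "duration metric" log line
    // above by timing a phase with time.Since. Illustrative only.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	start := time.Now()
    	time.Sleep(50 * time.Millisecond) // stand-in for the enable-addons phase
    	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
    		time.Since(start), []string{})
    }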
	W1217 02:04:47.635184 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:49.635373 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:52.135104 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:54.634638 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:57.134475 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:59.135391 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:01.635489 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:04.135231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:06.635570 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:09.135232 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:11.635157 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:14.135122 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:16.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:18.635204 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:21.135093 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:23.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:25.634571 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:27.635077 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:29.635231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:32.134521 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:34.135119 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:36.135210 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:38.635154 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:41.134707 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:43.635439 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:46.135272 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:48.634542 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:50.635081 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:53.135083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:55.135448 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:57.635130 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:00.134795 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:02.135223 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	[... 82 near-identical retry warnings, repeating every 2-2.5s from 02:06:04 through 02:09:10, elided ...]
	W1217 02:09:13.134531 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:14.634797 1494358 node_ready.go:38] duration metric: took 6m0.000749408s for node "no-preload-178365" to be "Ready" ...
	I1217 02:09:14.638073 1494358 out.go:203] 
	W1217 02:09:14.640977 1494358 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:09:14.641013 1494358 out.go:285] * 
	W1217 02:09:14.643229 1494358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:09:14.646121 1494358 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
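The 6m0s wait that expires above is minikube's node-Ready poll (node_ready.go:55 in the log). For reference, the retry-until-deadline pattern looks roughly like the client-go sketch below; the helper name, the 2.5s interval, and the kubeconfig handling are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls a node's Ready condition until it is True or the
	// timeout elapses. Errors from the GET (e.g. connection refused while
	// the apiserver is down) are logged and retried instead of aborting.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil // nil error => keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// 6 minutes matches the wait that expired in the failed run above.
		err = waitNodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "no-preload-178365", 6*time.Minute)
		fmt.Println("result:", err)
	}

Returning false with a nil error from the condition is what makes connection-refused retryable rather than fatal, which is why the log accumulates warnings until the context deadline expires and the run exits with GUEST_START.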
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1494487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:03:06.71743355Z",
	            "FinishedAt": "2025-12-17T02:03:05.348756992Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9255e0863872038f878a0377593d952443e5d8a7e0d1715541fab06d752ef770",
	            "SandboxKey": "/var/run/docker/netns/9255e0863872",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9e:f4:59:45:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "02e66a97e08a8d712f4ba9f711db1ac614b5e96335d8aceb3d7eccb7c2a2e478",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
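The inspect output above shows each container port published on 127.0.0.1 with an ephemeral host port (8443/tcp, the apiserver, maps to 34257 in this run). When a post-mortem needs that mapping programmatically, decoding just the Ports field is enough; the sketch below uses only the Go standard library, the struct shape and container name are taken from the dump above, and it assumes docker is on PATH.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectInfo models only the slice of `docker inspect` output needed
	// here: the NetworkSettings.Ports map seen in the dump above.
	type inspectInfo []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-178365").Output()
		if err != nil {
			panic(err) // container missing or docker unavailable
		}
		var info inspectInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		if len(info) == 0 {
			panic("no container in inspect output")
		}
		// For the run above this prints 127.0.0.1:34257.
		for _, b := range info[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}

The test harness reads the same mapping with a Go template instead of JSON decoding, as the `docker container inspect -f ... "22/tcp" ...` cli_runner line near the end of this log shows.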
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 2 (342.077668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-178365 logs -n 25: (1.145552658s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p newest-cni-456492 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-456492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:05:12.850501 1498704 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:05:12.850637 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.850649 1498704 out.go:374] Setting ErrFile to fd 2...
	I1217 02:05:12.850655 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.851041 1498704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:05:12.851511 1498704 out.go:368] Setting JSON to false
	I1217 02:05:12.852479 1498704 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28063,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:05:12.852572 1498704 start.go:143] virtualization:  
	I1217 02:05:12.855474 1498704 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:05:12.857672 1498704 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:12.857773 1498704 notify.go:221] Checking for updates...
	I1217 02:05:12.863254 1498704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:12.866037 1498704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:12.868948 1498704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:05:12.871863 1498704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:05:12.874787 1498704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:12.878103 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:12.878662 1498704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:12.900447 1498704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:05:12.900598 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:12.960234 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:12.950894493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:12.960347 1498704 docker.go:319] overlay module found
	I1217 02:05:12.963370 1498704 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:12.966210 1498704 start.go:309] selected driver: docker
	I1217 02:05:12.966233 1498704 start.go:927] validating driver "docker" against &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:12.966382 1498704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:12.967091 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:13.019814 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:13.010546439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:13.020178 1498704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:05:13.020210 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:13.020262 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:13.020307 1498704 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:13.023434 1498704 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 02:05:13.026234 1498704 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:05:13.029131 1498704 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:13.031994 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:13.032048 1498704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 02:05:13.032060 1498704 cache.go:65] Caching tarball of preloaded images
	I1217 02:05:13.032113 1498704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:13.032150 1498704 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:05:13.032162 1498704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 02:05:13.032281 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.052501 1498704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:13.052525 1498704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:13.052542 1498704 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:13.052572 1498704 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:13.052633 1498704 start.go:364] duration metric: took 38.597µs to acquireMachinesLock for "newest-cni-456492"
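The acquireMachinesLock lines above carry the whole lock spec in the struct dump: retry every Delay (500ms) until Timeout (10m0s) elapses. A minimal Go sketch of that poll-until-timeout shape, assuming a plain lock-file scheme (minikube's real lock sits behind a mutex library, so treat this as illustrative only):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock polls for an exclusive lock file, retrying every delay until timeout,
// mirroring the {Delay:500ms Timeout:10m0s} spec logged by start.go above.
func tryLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := tryLock("/tmp/mk-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("machines lock acquired")
}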
	I1217 02:05:13.052657 1498704 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:13.052663 1498704 fix.go:54] fixHost starting: 
	I1217 02:05:13.052926 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.069585 1498704 fix.go:112] recreateIfNeeded on newest-cni-456492: state=Stopped err=<nil>
	W1217 02:05:13.069617 1498704 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 02:05:11.635157 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:14.135122 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:16.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:13.072747 1498704 out.go:252] * Restarting existing docker container for "newest-cni-456492" ...
	I1217 02:05:13.072837 1498704 cli_runner.go:164] Run: docker start newest-cni-456492
	I1217 02:05:13.388698 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.414091 1498704 kic.go:430] container "newest-cni-456492" state is running.
	I1217 02:05:13.414525 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:13.433261 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.433961 1498704 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:13.434162 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:13.455043 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:13.455367 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:13.455376 1498704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:13.456190 1498704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:16.589394 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.589424 1498704 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 02:05:16.589509 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.608291 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.608611 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.608628 1498704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 02:05:16.748318 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.748417 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.766749 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.767082 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.767106 1498704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:16.899757 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
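Every cli_runner invocation of docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" above answers the same question: which ephemeral host port Docker mapped to the container's 22/tcp. That is why the SSH client dials 127.0.0.1 34259 instead of the container IP. A hedged Go equivalent of that lookup (assumes a local docker CLI and the container name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log runs: index into the port map, take the
	// first binding, print its HostPort (34259 in this run).
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"newest-cni-456492").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
}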
	I1217 02:05:16.899788 1498704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:05:16.899820 1498704 ubuntu.go:190] setting up certificates
	I1217 02:05:16.899839 1498704 provision.go:84] configureAuth start
	I1217 02:05:16.899906 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:16.924665 1498704 provision.go:143] copyHostCerts
	I1217 02:05:16.924743 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:05:16.924752 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:05:16.924828 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:05:16.924938 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:05:16.924943 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:05:16.924976 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:05:16.925038 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:05:16.925047 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:05:16.925072 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:05:16.925127 1498704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
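provision.go:117 reissues the machine's server certificate with the SAN set shown (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-456492), so a single cert stays valid whether the endpoint is reached via loopback port-forward, container IP, or name. A minimal sketch of issuing a cert with that SAN list using Go's crypto/x509; it self-signs for brevity, whereas minikube signs server.pem with the ca.pem/ca-key.pem pair named above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from provision.go:117 above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-456492"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}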
	I1217 02:05:17.601803 1498704 provision.go:177] copyRemoteCerts
	I1217 02:05:17.601873 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:17.601926 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.636357 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.741722 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:05:17.761034 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:17.779707 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:17.797837 1498704 provision.go:87] duration metric: took 897.968313ms to configureAuth
	I1217 02:05:17.797870 1498704 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:17.798087 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:17.798100 1498704 machine.go:97] duration metric: took 4.364124237s to provisionDockerMachine
	I1217 02:05:17.798118 1498704 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 02:05:17.798134 1498704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:17.798198 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:17.798254 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.815970 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.909838 1498704 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:17.913351 1498704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:17.913383 1498704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:17.913395 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:05:17.913453 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:05:17.913544 1498704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:05:17.913681 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:17.921360 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:17.939679 1498704 start.go:296] duration metric: took 141.5414ms for postStartSetup
	I1217 02:05:17.939826 1498704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:17.939877 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.957594 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.059706 1498704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:18.065122 1498704 fix.go:56] duration metric: took 5.012436797s for fixHost
	I1217 02:05:18.065156 1498704 start.go:83] releasing machines lock for "newest-cni-456492", held for 5.012509749s
	I1217 02:05:18.065242 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:18.082756 1498704 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:18.082825 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.083064 1498704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:05:18.083126 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.102210 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.102306 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.193581 1498704 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:18.286865 1498704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:18.291506 1498704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:18.291604 1498704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:18.301001 1498704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:18.301023 1498704 start.go:496] detecting cgroup driver to use...
	I1217 02:05:18.301056 1498704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:18.301104 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:05:18.318916 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:18.332388 1498704 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:05:18.332450 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:05:18.348560 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:05:18.361841 1498704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:05:18.501489 1498704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:05:18.625467 1498704 docker.go:234] disabling docker service ...
	I1217 02:05:18.625544 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:05:18.642408 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:05:18.656014 1498704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:05:18.765362 1498704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:05:18.886790 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:18.900617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:18.915221 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:18.924900 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:18.934313 1498704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:18.934389 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:18.943795 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.953183 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:18.962127 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.971122 1498704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:18.979419 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:18.988380 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:18.999817 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:19.010244 1498704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:19.018996 1498704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:19.026929 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.133908 1498704 ssh_runner.go:195] Run: sudo systemctl restart containerd
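The sed chain above edits /etc/containerd/config.toml in place before the restart. Taken together, the edits converge on roughly this fragment (illustrative only; the exact tree varies by containerd version):

[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.10.1"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false

SystemdCgroup = false matches the "cgroupfs" driver detected at detect.go:187; the pause-image pin keeps kubeadm and containerd agreeing on the sandbox image; and the runc.v1 / runtime.v1.linux rewrites retire runtime names that newer containerd releases no longer ship.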
	I1217 02:05:19.268405 1498704 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:05:19.268490 1498704 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:05:19.272284 1498704 start.go:564] Will wait 60s for crictl version
	I1217 02:05:19.272347 1498704 ssh_runner.go:195] Run: which crictl
	I1217 02:05:19.275756 1498704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:19.301130 1498704 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:05:19.301201 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.322372 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.348617 1498704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:05:19.351633 1498704 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:05:19.367774 1498704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:05:19.371830 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.384786 1498704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:05:19.387816 1498704 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:19.387972 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:19.388067 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.414283 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.414309 1498704 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:05:19.414396 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.439246 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.439272 1498704 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:05:19.439280 1498704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:05:19.439400 1498704 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:05:19.439475 1498704 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:05:19.464932 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:19.464957 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:19.464978 1498704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:05:19.465000 1498704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:19.465118 1498704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:05:19.465204 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:19.473220 1498704 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:19.473323 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:19.481191 1498704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:05:19.494733 1498704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:19.508255 1498704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
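"scp memory --> <path> (N bytes)" in the ssh_runner lines above means the asset is rendered in memory and streamed through the SSH session rather than copied from an on-disk file. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the port from this run; the key path is shortened and InsecureIgnoreHostKey is an illustration-only shortcut, tolerable here because the target is a local kic container:

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-456492/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34259", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container; never do this for real hosts
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Stream the in-memory payload straight into the remote file via sudo tee.
	sess.Stdin = bytes.NewReader([]byte("rendered-in-memory asset\n"))
	if err := sess.Run("sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null"); err != nil {
		panic(err)
	}
}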
	I1217 02:05:19.521299 1498704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:19.524923 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.534869 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.640328 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:19.658104 1498704 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 02:05:19.658171 1498704 certs.go:195] generating shared ca certs ...
	I1217 02:05:19.658202 1498704 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:19.658408 1498704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:05:19.658487 1498704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:05:19.658525 1498704 certs.go:257] generating profile certs ...
	I1217 02:05:19.658693 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 02:05:19.658805 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 02:05:19.658882 1498704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 02:05:19.659021 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:05:19.659079 1498704 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:19.659103 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:05:19.659164 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:05:19.659220 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:05:19.659286 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:05:19.659364 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:19.660007 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:19.680759 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:05:19.702848 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:19.724636 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:05:19.743745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:19.766745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:19.785567 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:19.805217 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:05:19.823885 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:05:19.842565 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:05:19.861136 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:19.881009 1498704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:19.900011 1498704 ssh_runner.go:195] Run: openssl version
	I1217 02:05:19.907885 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.916589 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:19.925294 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929759 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929879 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.973048 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:19.981056 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:05:19.988859 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:05:19.996704 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001580 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001857 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.047306 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:05:20.055839 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.063938 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:05:20.072095 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076535 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076605 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.118765 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
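Each test -s / ln -fs / openssl x509 -hash / test -L quartet above wires one CA into OpenSSL's trust directory: OpenSSL finds CAs via subject-hash symlinks such as /etc/ssl/certs/b5213941.0, so minikube computes the hash and checks that <hash>.0 points at the PEM it installed. A small Go sketch of the same verification (it shells out to openssl, since the subject-hash algorithm is OpenSSL's own):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err != nil {
		fmt.Println("missing trust link:", link, "- would need: sudo ln -fs", pemPath, link)
		return
	}
	fmt.Println("trusted via", link)
}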
	I1217 02:05:20.126976 1498704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:20.131206 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:20.172934 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:20.214362 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:20.255854 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:20.297036 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:20.339864 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
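openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24h); minikube runs it across the control-plane certs so anything near expiry can be regenerated before kubeadm starts. A Go equivalent of one such check (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same threshold as -checkend 86400: flag certs expiring within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}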
	I1217 02:05:20.381722 1498704 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:20.381822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:05:20.381904 1498704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:05:20.424644 1498704 cri.go:89] found id: ""
	I1217 02:05:20.424764 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:20.433427 1498704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:20.433456 1498704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:20.433550 1498704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:20.441251 1498704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:20.442099 1498704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.442456 1498704 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456492" cluster setting kubeconfig missing "newest-cni-456492" context setting]
	I1217 02:05:20.442986 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.445078 1498704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:20.453918 1498704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:05:20.453968 1498704 kubeadm.go:602] duration metric: took 20.505601ms to restartPrimaryControlPlane
	I1217 02:05:20.453978 1498704 kubeadm.go:403] duration metric: took 72.266987ms to StartCluster
	I1217 02:05:20.453993 1498704 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.454058 1498704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.454938 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.455145 1498704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:05:20.455516 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:20.455530 1498704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:20.455683 1498704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456492"
	I1217 02:05:20.455704 1498704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456492"
	I1217 02:05:20.455734 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456291 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.456447 1498704 addons.go:70] Setting dashboard=true in profile "newest-cni-456492"
	I1217 02:05:20.456459 1498704 addons.go:239] Setting addon dashboard=true in "newest-cni-456492"
	W1217 02:05:20.456465 1498704 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:20.456487 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456873 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.457295 1498704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456492"
	I1217 02:05:20.457327 1498704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456492"
	I1217 02:05:20.457617 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.460758 1498704 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:20.464032 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:20.511072 1498704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:20.511238 1498704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:20.511526 1498704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456492"
	I1217 02:05:20.511584 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.512215 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.514400 1498704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.514426 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:20.514495 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.517419 1498704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 02:05:18.635204 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:21.135093 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:20.520345 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:20.520380 1498704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:20.520470 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.545933 1498704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.545958 1498704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:20.546028 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.571506 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.597655 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.610038 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.744231 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:20.749535 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.770211 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.807578 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:20.807656 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:20.822894 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:20.822966 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:20.838508 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:20.838583 1498704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:20.854473 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:20.854546 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:20.870442 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:20.870510 1498704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:20.892689 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:20.892763 1498704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:20.907212 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:20.907283 1498704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:05:20.920377 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:20.920447 1498704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:20.934242 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:20.934313 1498704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:20.949356 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:21.122136 1498704 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:05:21.122238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:21.122377 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122428 1498704 retry.go:31] will retry after 140.698925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122498 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122514 1498704 retry.go:31] will retry after 200.872114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
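The retry.go lines here show minikube's standard response to an apiserver that is not up yet: the kubectl apply fails with connection refused, and the caller retries after a short, growing, jittered delay (140ms, 200ms, 347ms in this run). A minimal sketch of that pattern, not the actual retry.go implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered, exponentially growing delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 { // simulate the apiserver refusing the first connections
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}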
	W1217 02:05:21.122730 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122750 1498704 retry.go:31] will retry after 347.753215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.264115 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.324524 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:21.326955 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.326987 1498704 retry.go:31] will retry after 509.503403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.390952 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.391056 1498704 retry.go:31] will retry after 486.50092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.471226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.536155 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.536193 1498704 retry.go:31] will retry after 374.340896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.623199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
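The "pgrep -xnf kube-apiserver.*minikube.*" probes that recur throughout this log are minikube polling the node for a live apiserver process while the addon applies keep failing. A minimal Go sketch of that style of poll (the interval, deadline, and local execution here are illustrative assumptions; minikube actually runs the command on the node over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a process whose full command line
// matches the pattern exists, or the deadline passes. pgrep exits non-zero
// when nothing matches, so a nil error from Run() means the process is up.
func waitForAPIServer(pattern string, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		// -x exact match, -n newest match, -f match the full command line,
		// mirroring the probe in the log above (run locally here, not over SSH).
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, deadline)
}

func main() {
	if err := waitForAPIServer("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}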
	I1217 02:05:21.836797 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.878378 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:21.911452 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.932525 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.932573 1498704 retry.go:31] will retry after 673.446858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
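Every failure above has the same root cause: kubectl apply validates each manifest against the cluster's OpenAPI schema, which it downloads from /openapi/v2 on the apiserver, so while the apiserver is unreachable every apply fails at the validation step (hence the --validate=false hint) before anything is written to the cluster. A self-contained Go sketch of the schema download that is failing here (the URL is taken from the log; the TLS handling is an illustrative assumption, not kubectl's actual client code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOpenAPI mimics the schema fetch kubectl performs before validating a
// manifest. While the apiserver is down this returns the same
// "connection refused" error seen throughout the log above.
func probeOpenAPI(url string) error {
	client := &http.Client{
		Timeout: 32 * time.Second, // kubectl's request uses ?timeout=32s
		Transport: &http.Transport{
			// The apiserver's cert is not trusted by this standalone probe;
			// verification is skipped here for illustration only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
	return nil
}

func main() {
	if err := probeOpenAPI("https://localhost:8443/openapi/v2?timeout=32s"); err != nil {
		fmt.Println("probe failed:", err)
	}
}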
	W1217 02:05:22.024062 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.024104 1498704 retry.go:31] will retry after 357.640722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.030810 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.030855 1498704 retry.go:31] will retry after 697.108634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.122842 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:22.382402 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.447494 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.447529 1498704 retry.go:31] will retry after 907.58474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.606794 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:22.623237 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:22.712284 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.712316 1498704 retry.go:31] will retry after 1.166453431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.728640 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.790257 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.790294 1498704 retry.go:31] will retry after 693.242896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.122710 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:23.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:25.634571 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:23.356122 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:23.441808 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.441876 1498704 retry.go:31] will retry after 812.660244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.484193 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:23.553009 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.553088 1498704 retry.go:31] will retry after 1.540590446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.878932 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:23.940625 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.940657 1498704 retry.go:31] will retry after 1.715347401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.123129 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:24.255570 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.318166 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.318201 1498704 retry.go:31] will retry after 2.528105033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.622416 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.094702 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.122740 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.190434 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.190468 1498704 retry.go:31] will retry after 2.137532007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:25.622874 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.656976 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.735191 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.735228 1498704 retry.go:31] will retry after 1.824141068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:26.122718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.622402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.847039 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:26.915825 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.915864 1498704 retry.go:31] will retry after 3.628983163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:27.123109 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:27.329106 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:27.406949 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.406981 1498704 retry.go:31] will retry after 4.03347247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:27.560441 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:27.620941 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.620972 1498704 retry.go:31] will retry after 3.991176553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:27.623048 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:27.635077 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:29.635231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:28.123323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.622690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.123056 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.622383 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:30.122331 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:30.545057 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:30.621785 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.621822 1498704 retry.go:31] will retry after 4.4452238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:30.622853 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.122373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.440743 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:31.509992 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.510031 1498704 retry.go:31] will retry after 5.407597033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:31.613135 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:31.622584 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:31.697739 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.697776 1498704 retry.go:31] will retry after 2.825488937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:32.122427 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:32.622356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:32.134521 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:34.135119 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:36.135210 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:33.122865 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.622376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.122833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.523532 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:34.583134 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.583163 1498704 retry.go:31] will retry after 5.545323918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:34.622442 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:35.068147 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:35.122850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:35.134133 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.134169 1498704 retry.go:31] will retry after 4.861802964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:35.622377 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.122369 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.622378 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.918683 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:36.978447 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.978481 1498704 retry.go:31] will retry after 6.962519237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:37.122560 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:37.622836 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:38.635154 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:41.134707 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:38.122524 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.622862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.122871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.623166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.996206 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:40.063255 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.063292 1498704 retry.go:31] will retry after 7.781680021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:40.122526 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:40.129164 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:40.214505 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.214533 1498704 retry.go:31] will retry after 8.678807682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:40.622298 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.122333 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.622358 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.127159 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.622438 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:43.635439 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:46.135272 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:43.122461 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.622352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.941994 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:44.001689 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.001730 1498704 retry.go:31] will retry after 6.066883065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:44.123123 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:44.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.126164 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.623052 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.122898 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.622334 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.122393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.622323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.845223 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:48.634542 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:50.635081 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:47.908667 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:47.908705 1498704 retry.go:31] will retry after 18.007710991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:48.122861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.622412 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.894229 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:48.969090 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.969125 1498704 retry.go:31] will retry after 16.055685136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:49.122381 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:49.622837 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:50.069336 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:50.122996 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:50.134357 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.134397 1498704 retry.go:31] will retry after 18.576318696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.622399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.122356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.623152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.122522 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.622365 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
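Between apply attempts, minikube polls for a running apiserver process roughly every 500 ms. In pgrep -xnf, -f matches against the full command line, -x requires the whole line to match the pattern, and -n selects the newest match; exit status 1 means no process matched, which is why the poll keeps repeating:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"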
	W1217 02:05:53.135083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:55.135448 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
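The W-lines from PID 1494358 are interleaved output from a second profile running concurrently (no-preload-178365), whose node-readiness poll against its own apiserver at 192.168.76.2:8443 is refused as well; this interleaving also explains the occasional out-of-order timestamps between the two PIDs. The same endpoint can be probed by hand (a hypothetical unauthenticated check; a live apiserver would typically answer 403, whereas here the TCP connection is refused outright):

	curl -k https://192.168.76.2:8443/api/v1/nodes/no-preload-178365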
	I1217 02:05:53.123228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.622373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.622394 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.122388 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.122434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.622357 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.122345 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.622407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:57.635130 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:00.134795 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:58.122690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.622871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.122944 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.622822 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.123626 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.623133 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.122517 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.622861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.122995 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.622415 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:02.135223 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:04.634982 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:03.122366 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.623001 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.122805 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.622382 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.025226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:05.088234 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.088268 1498704 retry.go:31] will retry after 18.521411157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.122353 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.622518 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.916578 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:05.977704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.977737 1498704 retry.go:31] will retry after 29.235613176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.123051 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:06.623116 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.122863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.622361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:07.134988 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:09.135112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:11.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:08.123131 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.622326 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.711597 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:08.773115 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:08.773147 1498704 retry.go:31] will retry after 24.92518591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:09.122643 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:09.622393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.122375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.622634 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.122959 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.622850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.122346 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.622435 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:13.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:16.134662 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:13.122648 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.622828 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.123317 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.622872 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.122361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.622296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.622835 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.122778 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:18.135126 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:20.135188 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:18.123152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.623163 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.122407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.622841 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.123196 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.622898 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:20.622982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:20.655063 1498704 cri.go:89] found id: ""
	I1217 02:06:20.655091 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.655100 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:20.655106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:20.655169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:20.687901 1498704 cri.go:89] found id: ""
	I1217 02:06:20.687924 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.687932 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:20.687938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:20.687996 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:20.713818 1498704 cri.go:89] found id: ""
	I1217 02:06:20.713845 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.713854 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:20.713860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:20.713918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:20.738353 1498704 cri.go:89] found id: ""
	I1217 02:06:20.738376 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.738384 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:20.738396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:20.738455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:20.763275 1498704 cri.go:89] found id: ""
	I1217 02:06:20.763300 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.763309 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:20.763316 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:20.763377 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:20.787303 1498704 cri.go:89] found id: ""
	I1217 02:06:20.787328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.787337 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:20.787343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:20.787402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:20.812203 1498704 cri.go:89] found id: ""
	I1217 02:06:20.812230 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.812238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:20.812244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:20.812304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:20.836788 1498704 cri.go:89] found id: ""
	I1217 02:06:20.836814 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.836823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
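With the apiserver still absent, minikube sweeps the CRI for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and finds none. In the crictl invocation, -a includes containers in every state, --quiet limits output to container IDs, and --name filters by container name, so an empty result means the component has no container at all:

	sudo crictl ps -a --quiet --name=kube-apiserver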
	I1217 02:06:20.836831 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:20.836842 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:20.901301 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
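The repeated memcache.go errors (five per invocation here) are kubectl's discovery client retrying the /api group-list fetch before giving up with the final "connection refused" summary. The same command can be replayed by hand inside the node:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig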
	I1217 02:06:20.901324 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:20.901337 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:20.927207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:20.927244 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:20.955351 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:20.955377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:21.010892 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:21.010928 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
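Since no containers exist to inspect, the log gatherer falls back to host-level sources; the same commands it runs can be issued manually over ssh:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400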
	W1217 02:06:22.635190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:25.135234 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:23.526340 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:23.536950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:23.537021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:23.561240 1498704 cri.go:89] found id: ""
	I1217 02:06:23.561267 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.561276 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:23.561282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:23.561340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:23.586385 1498704 cri.go:89] found id: ""
	I1217 02:06:23.586407 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.586415 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:23.586421 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:23.586479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:23.610820 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:06:23.612177 1498704 cri.go:89] found id: ""
	I1217 02:06:23.612201 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.612210 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:23.612216 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:23.612270 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1217 02:06:23.698147 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698227 1498704 retry.go:31] will retry after 35.769421328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698299 1498704 cri.go:89] found id: ""
	I1217 02:06:23.698328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.698348 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:23.698379 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:23.698473 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:23.730479 1498704 cri.go:89] found id: ""
	I1217 02:06:23.730555 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.730569 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:23.730577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:23.730656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:23.757694 1498704 cri.go:89] found id: ""
	I1217 02:06:23.757717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.757726 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:23.757732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:23.757802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:23.787070 1498704 cri.go:89] found id: ""
	I1217 02:06:23.787145 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.787162 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:23.787170 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:23.787231 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:23.815895 1498704 cri.go:89] found id: ""
	I1217 02:06:23.815928 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.815937 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:23.815947 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:23.815977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:23.845530 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:23.845558 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:23.904348 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:23.904385 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:23.919409 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:23.919438 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:23.986183 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:23.986246 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:23.986266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:26.512910 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:26.523572 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:26.523644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:26.549045 1498704 cri.go:89] found id: ""
	I1217 02:06:26.549077 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.549087 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:26.549100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:26.549181 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:26.573386 1498704 cri.go:89] found id: ""
	I1217 02:06:26.573409 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.573417 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:26.573423 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:26.573485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:26.597629 1498704 cri.go:89] found id: ""
	I1217 02:06:26.597673 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.597688 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:26.597695 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:26.597755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:26.625905 1498704 cri.go:89] found id: ""
	I1217 02:06:26.625933 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.625942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:26.625949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:26.626016 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:26.663442 1498704 cri.go:89] found id: ""
	I1217 02:06:26.663466 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.663475 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:26.663482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:26.663565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:26.692315 1498704 cri.go:89] found id: ""
	I1217 02:06:26.692342 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.692351 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:26.692362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:26.692422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:26.718259 1498704 cri.go:89] found id: ""
	I1217 02:06:26.718287 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.718296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:26.718303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:26.718361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:26.743360 1498704 cri.go:89] found id: ""
	I1217 02:06:26.743383 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.743391 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:26.743400 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:26.743412 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:26.770132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:26.770158 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:26.829657 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:26.829749 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:26.845511 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:26.845538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:26.912984 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:26.913004 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:26.913017 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:27.635261 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:30.135207 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:29.440066 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:29.450548 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:29.450621 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:29.474768 1498704 cri.go:89] found id: ""
	I1217 02:06:29.474800 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.474809 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:29.474816 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:29.474886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:29.498947 1498704 cri.go:89] found id: ""
	I1217 02:06:29.498969 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.498977 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:29.498983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:29.499041 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:29.523540 1498704 cri.go:89] found id: ""
	I1217 02:06:29.523564 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.523573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:29.523579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:29.523643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:29.556044 1498704 cri.go:89] found id: ""
	I1217 02:06:29.556069 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.556078 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:29.556084 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:29.556144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:29.581373 1498704 cri.go:89] found id: ""
	I1217 02:06:29.581399 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.581408 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:29.581414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:29.581485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:29.607453 1498704 cri.go:89] found id: ""
	I1217 02:06:29.607479 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.607489 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:29.607495 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:29.607576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:29.639841 1498704 cri.go:89] found id: ""
	I1217 02:06:29.639865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.639875 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:29.639881 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:29.639938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:29.670608 1498704 cri.go:89] found id: ""
	I1217 02:06:29.670635 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.670643 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:29.670653 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:29.670665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:29.728148 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:29.728181 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:29.743004 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:29.743029 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:29.815740 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:29.815762 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:29.815775 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.842206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:29.842243 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
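
Each probe in the cycle above runs "crictl ps -a --quiet --name=<component>" and gets back an empty ID list, meaning containerd has not created any control-plane container, running or exited. A minimal sketch of the same probe loop, assuming crictl is on PATH inside the node and pointed at the containerd socket:

    #!/usr/bin/env bash
    # Probe for each control-plane container the way the log does:
    # -a lists all states, --quiet prints only IDs, --name filters by name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
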
	I1217 02:06:32.370825 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:32.383399 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:32.383490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:32.416122 1498704 cri.go:89] found id: ""
	I1217 02:06:32.416148 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.416157 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:32.416164 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:32.416235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:32.450068 1498704 cri.go:89] found id: ""
	I1217 02:06:32.450092 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.450101 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:32.450107 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:32.450176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:32.475101 1498704 cri.go:89] found id: ""
	I1217 02:06:32.475126 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.475135 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:32.475142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:32.475218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:32.500347 1498704 cri.go:89] found id: ""
	I1217 02:06:32.500372 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.500380 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:32.500387 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:32.500447 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:32.525315 1498704 cri.go:89] found id: ""
	I1217 02:06:32.525346 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.525355 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:32.525361 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:32.525440 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:32.550267 1498704 cri.go:89] found id: ""
	I1217 02:06:32.550341 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.550358 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:32.550365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:32.550424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:32.575413 1498704 cri.go:89] found id: ""
	I1217 02:06:32.575438 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.575447 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:32.575453 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:32.575559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:32.603477 1498704 cri.go:89] found id: ""
	I1217 02:06:32.603503 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.603513 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:32.603523 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:32.603568 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:32.669699 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:32.669735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:32.686097 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:32.686126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:32.755583 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:32.755604 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:32.755616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:32.782146 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:32.782195 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:32.135482 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:34.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:33.698737 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:33.767478 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:33.767516 1498704 retry.go:31] will retry after 19.401613005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.214860 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:35.276710 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.276741 1498704 retry.go:31] will retry after 25.686831054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
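
Before the next retry window (25.7s here), a direct reachability check against the apiserver is cheaper than another apply. A sketch using the standard kube-apiserver health endpoints; -k skips TLS verification, which is acceptable for a liveness probe run from inside the node:

    # Both return "ok" once the apiserver is serving; until then they
    # fail with the same "connection refused" seen throughout this log.
    curl -ksS https://localhost:8443/livez && echo
    curl -ksS "https://localhost:8443/readyz?verbose" && echo
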
	I1217 02:06:35.310030 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:35.320395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:35.320472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:35.344503 1498704 cri.go:89] found id: ""
	I1217 02:06:35.344525 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.344533 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:35.344539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:35.344597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:35.375750 1498704 cri.go:89] found id: ""
	I1217 02:06:35.375773 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.375782 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:35.375788 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:35.375857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:35.403776 1498704 cri.go:89] found id: ""
	I1217 02:06:35.403803 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.403813 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:35.403819 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:35.403878 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:35.437584 1498704 cri.go:89] found id: ""
	I1217 02:06:35.437608 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.437616 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:35.437623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:35.437723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:35.467173 1498704 cri.go:89] found id: ""
	I1217 02:06:35.467207 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.467216 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:35.467223 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:35.467289 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:35.491257 1498704 cri.go:89] found id: ""
	I1217 02:06:35.491284 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.491294 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:35.491301 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:35.491380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:35.515935 1498704 cri.go:89] found id: ""
	I1217 02:06:35.515961 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.515971 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:35.515978 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:35.516077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:35.542706 1498704 cri.go:89] found id: ""
	I1217 02:06:35.542730 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.542739 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:35.542748 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:35.542759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:35.601383 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:35.601428 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:35.616228 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:35.616269 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:35.693548 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:35.693569 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:35.693584 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:35.719247 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:35.719286 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
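
With zero control-plane containers found, the most informative of the gathered logs is usually the kubelet journal, since the kubelet is what should be starting the static pods. A sketch for following both relevant units live, assuming the standard kubeadm static-pod manifest path:

    # Static-pod sync errors (image pulls, sandbox creation, bad certs)
    # surface in these two units; -f follows, -n 100 gives recent context.
    sudo journalctl -u kubelet -u containerd -f -n 100
    # Manifests the kubelet is expected to run:
    ls -l /etc/kubernetes/manifests/
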
	W1217 02:06:36.635304 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:39.135165 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:41.135205 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
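
The interleaved node_ready warnings come from the parallel no-preload test (pid 1494358) polling its own apiserver at 192.168.76.2:8443 for the node's Ready condition. Roughly the same query as a one-liner, assuming minikube's usual per-profile kubectl context exists:

    # Prints "True" once the Ready condition flips; the context name
    # matching the profile is an assumption based on minikube defaults.
    kubectl --context no-preload-178365 get node no-preload-178365 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
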
	I1217 02:06:38.250028 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:38.261967 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:38.262037 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:38.286400 1498704 cri.go:89] found id: ""
	I1217 02:06:38.286423 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.286431 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:38.286437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:38.286499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:38.310618 1498704 cri.go:89] found id: ""
	I1217 02:06:38.310639 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.310647 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:38.310654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:38.310713 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:38.335110 1498704 cri.go:89] found id: ""
	I1217 02:06:38.335136 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.335144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:38.335151 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:38.335214 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:38.364179 1498704 cri.go:89] found id: ""
	I1217 02:06:38.364202 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.364211 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:38.364218 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:38.364278 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:38.402338 1498704 cri.go:89] found id: ""
	I1217 02:06:38.402366 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.402374 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:38.402384 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:38.402443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:38.433053 1498704 cri.go:89] found id: ""
	I1217 02:06:38.433081 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.433090 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:38.433096 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:38.433155 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:38.461635 1498704 cri.go:89] found id: ""
	I1217 02:06:38.461688 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.461698 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:38.461704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:38.461767 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:38.486774 1498704 cri.go:89] found id: ""
	I1217 02:06:38.486798 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.486807 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:38.486816 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:38.486827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:38.543417 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:38.543453 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:38.558472 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:38.558499 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:38.627234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:38.627308 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:38.627336 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:38.656399 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:38.656481 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:41.188669 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:41.199463 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:41.199550 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:41.223737 1498704 cri.go:89] found id: ""
	I1217 02:06:41.223762 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.223771 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:41.223778 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:41.223842 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:41.248972 1498704 cri.go:89] found id: ""
	I1217 02:06:41.248998 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.249014 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:41.249022 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:41.249084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:41.274840 1498704 cri.go:89] found id: ""
	I1217 02:06:41.274873 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.274886 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:41.274892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:41.274965 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:41.302162 1498704 cri.go:89] found id: ""
	I1217 02:06:41.302188 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.302197 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:41.302204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:41.302274 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:41.331745 1498704 cri.go:89] found id: ""
	I1217 02:06:41.331771 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.331780 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:41.331786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:41.331872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:41.366507 1498704 cri.go:89] found id: ""
	I1217 02:06:41.366538 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.366559 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:41.366567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:41.366642 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:41.402343 1498704 cri.go:89] found id: ""
	I1217 02:06:41.402390 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.402400 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:41.402409 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:41.402482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:41.442142 1498704 cri.go:89] found id: ""
	I1217 02:06:41.442169 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.442177 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:41.442187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:41.442198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:41.498349 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:41.498432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:41.514261 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:41.514287 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:41.577450 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:41.577470 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:41.577483 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:41.602731 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:41.602766 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:43.635083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:45.635371 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:44.138863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:44.149308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:44.149424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:44.175006 1498704 cri.go:89] found id: ""
	I1217 02:06:44.175031 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.175040 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:44.175047 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:44.175103 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:44.199571 1498704 cri.go:89] found id: ""
	I1217 02:06:44.199596 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.199605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:44.199612 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:44.199669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:44.227289 1498704 cri.go:89] found id: ""
	I1217 02:06:44.227313 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.227323 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:44.227329 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:44.227418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:44.255509 1498704 cri.go:89] found id: ""
	I1217 02:06:44.255549 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.255558 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:44.255564 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:44.255622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:44.282827 1498704 cri.go:89] found id: ""
	I1217 02:06:44.282850 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.282858 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:44.282864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:44.282971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:44.310331 1498704 cri.go:89] found id: ""
	I1217 02:06:44.310354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.310363 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:44.310370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:44.310427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:44.334927 1498704 cri.go:89] found id: ""
	I1217 02:06:44.334952 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.334961 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:44.334968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:44.335068 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:44.359119 1498704 cri.go:89] found id: ""
	I1217 02:06:44.359144 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.359153 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:44.359162 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:44.359192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:44.436966 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:44.436987 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:44.437000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:44.462649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:44.462686 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.492091 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:44.492120 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:44.548670 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:44.548707 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.063448 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:47.073962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:47.074076 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:47.100530 1498704 cri.go:89] found id: ""
	I1217 02:06:47.100565 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.100574 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:47.100580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:47.100656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:47.126541 1498704 cri.go:89] found id: ""
	I1217 02:06:47.126573 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.126582 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:47.126589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:47.126657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:47.155783 1498704 cri.go:89] found id: ""
	I1217 02:06:47.155807 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.155816 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:47.155822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:47.155887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:47.183519 1498704 cri.go:89] found id: ""
	I1217 02:06:47.183547 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.183556 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:47.183562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:47.183640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:47.207004 1498704 cri.go:89] found id: ""
	I1217 02:06:47.207029 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.207038 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:47.207044 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:47.207107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:47.236132 1498704 cri.go:89] found id: ""
	I1217 02:06:47.236157 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.236166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:47.236173 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:47.236237 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:47.262428 1498704 cri.go:89] found id: ""
	I1217 02:06:47.262452 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.262460 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:47.262470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:47.262526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:47.291039 1498704 cri.go:89] found id: ""
	I1217 02:06:47.291113 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.291127 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:47.291137 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:47.291154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:47.348423 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:47.348457 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.362973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:47.363001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:47.446529 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:47.446602 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:47.446619 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:47.471848 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:47.471885 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:48.135178 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:50.635159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:50.002430 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:50.016670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:50.016759 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:50.048092 1498704 cri.go:89] found id: ""
	I1217 02:06:50.048116 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.048126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:50.048132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:50.048193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:50.077981 1498704 cri.go:89] found id: ""
	I1217 02:06:50.078006 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.078016 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:50.078023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:50.078084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:50.104799 1498704 cri.go:89] found id: ""
	I1217 02:06:50.104824 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.104833 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:50.104839 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:50.104899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:50.134987 1498704 cri.go:89] found id: ""
	I1217 02:06:50.135010 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.135019 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:50.135025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:50.135088 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:50.163663 1498704 cri.go:89] found id: ""
	I1217 02:06:50.163689 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.163698 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:50.163704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:50.163771 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:50.189331 1498704 cri.go:89] found id: ""
	I1217 02:06:50.189354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.189362 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:50.189369 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:50.189435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:50.214491 1498704 cri.go:89] found id: ""
	I1217 02:06:50.214516 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.214525 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:50.214531 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:50.214590 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:50.238415 1498704 cri.go:89] found id: ""
	I1217 02:06:50.238442 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.238451 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:50.238460 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:50.238472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.269776 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:50.269804 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:50.327018 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:50.327055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:50.341848 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:50.341876 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:50.424429 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:50.424452 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:50.424466 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:52.635229 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:54.635273 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:52.954006 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:52.964727 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:52.964802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:52.989789 1498704 cri.go:89] found id: ""
	I1217 02:06:52.989810 1498704 logs.go:282] 0 containers: []
	W1217 02:06:52.989819 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:52.989826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:52.989887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:53.015439 1498704 cri.go:89] found id: ""
	I1217 02:06:53.015467 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.015476 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:53.015482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:53.015592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:53.040841 1498704 cri.go:89] found id: ""
	I1217 02:06:53.040865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.040875 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:53.040882 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:53.040942 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:53.066349 1498704 cri.go:89] found id: ""
	I1217 02:06:53.066374 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.066383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:53.066389 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:53.066451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:53.091390 1498704 cri.go:89] found id: ""
	I1217 02:06:53.091415 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.091424 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:53.091430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:53.091490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:53.117556 1498704 cri.go:89] found id: ""
	I1217 02:06:53.117581 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.117590 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:53.117597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:53.117683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:53.142385 1498704 cri.go:89] found id: ""
	I1217 02:06:53.142411 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.142421 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:53.142428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:53.142487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:53.167326 1498704 cri.go:89] found id: ""
	I1217 02:06:53.167351 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.167360 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:53.167370 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:53.167410 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:53.169580 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:06:53.227048 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:53.227133 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:06:53.263335 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:53.263474 1498704 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:06:53.263485 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:53.263548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:53.331925 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:53.331956 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:53.331970 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:53.358423 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:53.358461 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:55.889770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:55.902670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:55.902755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:55.931695 1498704 cri.go:89] found id: ""
	I1217 02:06:55.931717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.931726 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:55.931732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:55.931792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:55.957876 1498704 cri.go:89] found id: ""
	I1217 02:06:55.957898 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.957906 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:55.957913 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:55.957971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:55.985470 1498704 cri.go:89] found id: ""
	I1217 02:06:55.985494 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.985503 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:55.985510 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:55.985569 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:56.012853 1498704 cri.go:89] found id: ""
	I1217 02:06:56.012876 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.012885 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:56.012892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:56.012953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:56.038869 1498704 cri.go:89] found id: ""
	I1217 02:06:56.038896 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.038906 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:56.038912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:56.038974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:56.063896 1498704 cri.go:89] found id: ""
	I1217 02:06:56.063922 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.063931 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:56.063938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:56.063998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:56.094167 1498704 cri.go:89] found id: ""
	I1217 02:06:56.094194 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.094202 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:56.094209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:56.094317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:56.119180 1498704 cri.go:89] found id: ""
	I1217 02:06:56.119203 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.119211 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:56.119220 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:56.119233 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:56.145717 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:56.145755 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:56.174733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:56.174764 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:56.231996 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:56.232031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:56.246270 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:56.246298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:56.310523 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:58.810773 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:58.820984 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:58.821052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:58.844690 1498704 cri.go:89] found id: ""
	I1217 02:06:58.844713 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.844723 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:58.844729 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:58.844789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:58.869040 1498704 cri.go:89] found id: ""
	I1217 02:06:58.869065 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.869074 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:58.869081 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:58.869141 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:58.897937 1498704 cri.go:89] found id: ""
	I1217 02:06:58.897965 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.897974 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:58.897981 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:58.898046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:58.936181 1498704 cri.go:89] found id: ""
	I1217 02:06:58.936206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.936216 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:58.936222 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:58.936284 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:58.961870 1498704 cri.go:89] found id: ""
	I1217 02:06:58.961894 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.961902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:58.961908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:58.961973 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:58.987453 1498704 cri.go:89] found id: ""
	I1217 02:06:58.987476 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.987485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:58.987492 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:58.987589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:59.014256 1498704 cri.go:89] found id: ""
	I1217 02:06:59.014281 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.014290 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:59.014296 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:59.014356 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:59.043181 1498704 cri.go:89] found id: ""
	I1217 02:06:59.043206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.043214 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:59.043224 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:59.043265 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:59.069988 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:59.070014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:59.126583 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:59.126616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:59.143769 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:59.143858 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:59.206336 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:59.206357 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:59.206368 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:59.467894 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:59.526704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.526801 1498704 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:00.964501 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:01.024877 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:01.024990 1498704 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:01.030055 1498704 out.go:179] * Enabled addons: 
	W1217 02:06:57.134604 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:59.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:01.032983 1498704 addons.go:530] duration metric: took 1m40.577449503s for enable addons: enabled=[]
	I1217 02:07:01.732628 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:01.743041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:01.743116 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:01.767462 1498704 cri.go:89] found id: ""
	I1217 02:07:01.767488 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.767497 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:01.767503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:01.767602 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:01.793082 1498704 cri.go:89] found id: ""
	I1217 02:07:01.793104 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.793112 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:01.793119 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:01.793179 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:01.819716 1498704 cri.go:89] found id: ""
	I1217 02:07:01.819740 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.819749 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:01.819755 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:01.819815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:01.847485 1498704 cri.go:89] found id: ""
	I1217 02:07:01.847556 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.847572 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:01.847580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:01.847641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:01.875985 1498704 cri.go:89] found id: ""
	I1217 02:07:01.876062 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.876084 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:01.876103 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:01.876193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:01.910714 1498704 cri.go:89] found id: ""
	I1217 02:07:01.910739 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.910748 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:01.910754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:01.910813 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:01.937846 1498704 cri.go:89] found id: ""
	I1217 02:07:01.937871 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.937880 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:01.937886 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:01.937945 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:01.964067 1498704 cri.go:89] found id: ""
	I1217 02:07:01.964091 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.964100 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:01.964114 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:01.964126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:02.028700 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:02.028724 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:02.028739 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:02.054141 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:02.054180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:02.082544 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:02.082570 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:02.139516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:02.139555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:07:01.635378 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:04.134753 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:06.135163 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:04.654404 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:04.665750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:04.665823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:04.692548 1498704 cri.go:89] found id: ""
	I1217 02:07:04.692573 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.692582 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:04.692589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:04.692649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:04.716945 1498704 cri.go:89] found id: ""
	I1217 02:07:04.716971 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.716980 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:04.716986 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:04.717050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:04.741853 1498704 cri.go:89] found id: ""
	I1217 02:07:04.741919 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.741943 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:04.741956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:04.742029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:04.766368 1498704 cri.go:89] found id: ""
	I1217 02:07:04.766432 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.766456 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:04.766471 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:04.766543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:04.791787 1498704 cri.go:89] found id: ""
	I1217 02:07:04.791811 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.791819 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:04.791826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:04.791886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:04.817229 1498704 cri.go:89] found id: ""
	I1217 02:07:04.817255 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.817264 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:04.817271 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:04.817343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:04.841915 1498704 cri.go:89] found id: ""
	I1217 02:07:04.841938 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.841947 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:04.841953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:04.842013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:04.866862 1498704 cri.go:89] found id: ""
	I1217 02:07:04.866889 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.866898 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:04.866908 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:04.866920 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:04.930507 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:04.930554 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.948025 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:04.948060 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:05.019651 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:05.019675 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:05.019688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:05.046001 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:05.046036 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
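Each cycle above first enumerates the control-plane containers through the CRI (cri.go): one sudo crictl ps -a --quiet --name=<component> per component, and every probe returns an empty ID list because nothing is running yet. A rough local equivalent of that probe loop, run on the node itself (minikube drives these commands over SSH via ssh_runner.go; the component names are the ones in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }

Note that --quiet prints only container IDs and -a includes exited containers, so an empty result means the component has never been created in any state, not merely that it stopped.
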
	I1217 02:07:07.578495 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:07.591153 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:07.591225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:07.621427 1498704 cri.go:89] found id: ""
	I1217 02:07:07.621450 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.621459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:07.621466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:07.621526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:07.661892 1498704 cri.go:89] found id: ""
	I1217 02:07:07.661915 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.661923 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:07.661929 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:07.661995 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:07.695665 1498704 cri.go:89] found id: ""
	I1217 02:07:07.695693 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.695703 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:07.695709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:07.695775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:07.721278 1498704 cri.go:89] found id: ""
	I1217 02:07:07.721308 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.721316 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:07.721323 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:07.721381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:07.745368 1498704 cri.go:89] found id: ""
	I1217 02:07:07.745396 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.745404 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:07.745411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:07.745469 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:07.773994 1498704 cri.go:89] found id: ""
	I1217 02:07:07.774017 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.774025 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:07.774032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:07.774094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:07.799025 1498704 cri.go:89] found id: ""
	I1217 02:07:07.799049 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.799058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:07.799070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:07.799128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:07.824235 1498704 cri.go:89] found id: ""
	I1217 02:07:07.824261 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.824270 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:07.824278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:07.824290 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:07.839101 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:07.839129 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:08.135245 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:10.635146 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:07.923334 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:07.923360 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:07.923372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:07.949715 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:07.949754 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.977665 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:07.977690 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.537062 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:10.547797 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:10.547872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:10.572434 1498704 cri.go:89] found id: ""
	I1217 02:07:10.572462 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.572472 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:10.572479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:10.572560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:10.597486 1498704 cri.go:89] found id: ""
	I1217 02:07:10.597510 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.597519 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:10.597525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:10.597591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:10.627205 1498704 cri.go:89] found id: ""
	I1217 02:07:10.627227 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.627236 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:10.627241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:10.627316 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:10.661788 1498704 cri.go:89] found id: ""
	I1217 02:07:10.661815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.661825 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:10.661832 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:10.661892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:10.694378 1498704 cri.go:89] found id: ""
	I1217 02:07:10.694403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.694411 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:10.694417 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:10.694481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:10.719732 1498704 cri.go:89] found id: ""
	I1217 02:07:10.719759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.719768 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:10.719775 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:10.719834 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:10.746071 1498704 cri.go:89] found id: ""
	I1217 02:07:10.746141 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.746169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:10.746181 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:10.746257 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:10.771251 1498704 cri.go:89] found id: ""
	I1217 02:07:10.771324 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.771339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:10.771349 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:10.771363 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:10.797277 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:10.797316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:10.824227 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:10.824255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.883648 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:10.883685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:10.899500 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:10.899545 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:10.971848 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
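The describe-nodes step keeps failing the same way in every cycle: nothing is listening on port 8443, either on the loopback address kubectl uses or on the node IP the readiness poll uses. A quick reachability check that reproduces the two "connection refused" errors seen above (addresses taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"127.0.0.1:8443", "192.168.76.2:8443"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                // Prints "connect: connection refused" while kube-apiserver is down.
                fmt.Printf("%s: %v\n", addr, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: reachable\n", addr)
        }
    }
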
	W1217 02:07:13.135257 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:15.635347 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:13.472155 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:13.482654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:13.482730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:13.511840 1498704 cri.go:89] found id: ""
	I1217 02:07:13.511865 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.511874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:13.511880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:13.511938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:13.539314 1498704 cri.go:89] found id: ""
	I1217 02:07:13.539340 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.539349 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:13.539355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:13.539418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:13.564523 1498704 cri.go:89] found id: ""
	I1217 02:07:13.564595 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.564616 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:13.564635 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:13.564722 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:13.588672 1498704 cri.go:89] found id: ""
	I1217 02:07:13.588696 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.588705 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:13.588711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:13.588769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:13.613292 1498704 cri.go:89] found id: ""
	I1217 02:07:13.613370 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.613394 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:13.613413 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:13.613497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:13.640379 1498704 cri.go:89] found id: ""
	I1217 02:07:13.640401 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.640467 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:13.640475 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:13.640596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:13.670823 1498704 cri.go:89] found id: ""
	I1217 02:07:13.670897 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.670909 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:13.670915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:13.671033 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:13.697928 1498704 cri.go:89] found id: ""
	I1217 02:07:13.697954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.697963 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:13.697973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:13.697991 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:13.764081 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:13.764103 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:13.764117 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:13.789698 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:13.789735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:13.817458 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:13.817528 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:13.873570 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:13.873604 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
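The "Gathering logs for ..." steps each map to a single shell command, capped at the last 400 lines per source. A rough local rendering of that gathering pass, using the exact commands from the log; the order of sources varies between cycles in the log, which suggests they are not iterated in a fixed order, and the map below mirrors that:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
        }
    }
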
	I1217 02:07:16.390490 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:16.400824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:16.400892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:16.433284 1498704 cri.go:89] found id: ""
	I1217 02:07:16.433306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.433315 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:16.433321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:16.433382 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:16.459029 1498704 cri.go:89] found id: ""
	I1217 02:07:16.459051 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.459059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:16.459065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:16.459123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:16.482532 1498704 cri.go:89] found id: ""
	I1217 02:07:16.482559 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.482568 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:16.482574 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:16.482635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:16.508099 1498704 cri.go:89] found id: ""
	I1217 02:07:16.508126 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.508135 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:16.508141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:16.508198 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:16.537293 1498704 cri.go:89] found id: ""
	I1217 02:07:16.537327 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.537336 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:16.537343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:16.537422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:16.561736 1498704 cri.go:89] found id: ""
	I1217 02:07:16.561761 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.561769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:16.561776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:16.561841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:16.588020 1498704 cri.go:89] found id: ""
	I1217 02:07:16.588054 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.588063 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:16.588069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:16.588136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:16.614951 1498704 cri.go:89] found id: ""
	I1217 02:07:16.614983 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.614993 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:16.615018 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:16.615035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:16.674706 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:16.674738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.693871 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:16.694008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:16.761779 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:16.761800 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:16.761813 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:16.788228 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:16.788270 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
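The "container status" line uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl at whatever path which finds, fall back to the bare name otherwise, and run docker ps only if the crictl invocation fails outright. An approximate Go rendering of the same preference order:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo exits non-zero if crictl is missing or fails, triggering the fallback.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, _ = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        fmt.Print(string(out))
    }
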
	W1217 02:07:18.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:20.135199 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:19.320399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:19.330773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:19.330845 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:19.354921 1498704 cri.go:89] found id: ""
	I1217 02:07:19.354990 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.355015 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:19.355028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:19.355100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:19.383572 1498704 cri.go:89] found id: ""
	I1217 02:07:19.383648 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.383662 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:19.383670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:19.383735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:19.412179 1498704 cri.go:89] found id: ""
	I1217 02:07:19.412204 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.412213 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:19.412229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:19.412290 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:19.437924 1498704 cri.go:89] found id: ""
	I1217 02:07:19.437950 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.437959 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:19.437966 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:19.438057 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:19.462416 1498704 cri.go:89] found id: ""
	I1217 02:07:19.462483 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.462507 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:19.462528 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:19.462618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:19.486955 1498704 cri.go:89] found id: ""
	I1217 02:07:19.487022 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.487047 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:19.487061 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:19.487133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:19.517143 1498704 cri.go:89] found id: ""
	I1217 02:07:19.517170 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.517178 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:19.517185 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:19.517245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:19.541419 1498704 cri.go:89] found id: ""
	I1217 02:07:19.541443 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.541452 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:19.541462 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:19.541474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:19.600586 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:19.600621 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:19.615645 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:19.615673 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:19.700496 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:19.700518 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:19.700531 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:19.725860 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:19.725896 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
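Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* (as on the next line): -f matches the pattern against the full command line, -x requires it to match exactly, and -n keeps only the newest matching process. The log does not record its output, but a small sketch of the same check looks like this; pgrep exits non-zero when nothing matches:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits 1 when no process matches the pattern.
            fmt.Println("no kube-apiserver process found")
            return
        }
        fmt.Println("newest kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
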
	I1217 02:07:22.254753 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:22.266831 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:22.266902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:22.291227 1498704 cri.go:89] found id: ""
	I1217 02:07:22.291306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.291329 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:22.291344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:22.291421 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:22.317812 1498704 cri.go:89] found id: ""
	I1217 02:07:22.317835 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.317844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:22.317850 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:22.317929 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:22.341950 1498704 cri.go:89] found id: ""
	I1217 02:07:22.341973 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.341982 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:22.341991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:22.342074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:22.368217 1498704 cri.go:89] found id: ""
	I1217 02:07:22.368291 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.368330 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:22.368350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:22.368435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:22.396888 1498704 cri.go:89] found id: ""
	I1217 02:07:22.396911 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.396920 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:22.396926 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:22.396987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:22.420964 1498704 cri.go:89] found id: ""
	I1217 02:07:22.421040 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.421064 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:22.421083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:22.421163 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:22.446890 1498704 cri.go:89] found id: ""
	I1217 02:07:22.446954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.446980 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:22.447002 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:22.447067 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:22.475922 1498704 cri.go:89] found id: ""
	I1217 02:07:22.475949 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.475959 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:22.475968 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:22.475980 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:22.532457 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:22.532490 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:22.546823 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:22.546900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:22.612059 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:22.612089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:22.612102 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:22.642268 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:22.642325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:22.635112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:25.134718 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:25.182933 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:25.194033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:25.194115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:25.218403 1498704 cri.go:89] found id: ""
	I1217 02:07:25.218426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.218434 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:25.218441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:25.218500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:25.247233 1498704 cri.go:89] found id: ""
	I1217 02:07:25.247257 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.247267 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:25.247272 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:25.247337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:25.271255 1498704 cri.go:89] found id: ""
	I1217 02:07:25.271278 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.271286 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:25.271292 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:25.271354 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:25.295129 1498704 cri.go:89] found id: ""
	I1217 02:07:25.295152 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.295161 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:25.295167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:25.295232 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:25.323735 1498704 cri.go:89] found id: ""
	I1217 02:07:25.323802 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.323818 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:25.323826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:25.323895 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:25.348083 1498704 cri.go:89] found id: ""
	I1217 02:07:25.348107 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.348116 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:25.348123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:25.348187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:25.375945 1498704 cri.go:89] found id: ""
	I1217 02:07:25.375967 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.375976 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:25.375982 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:25.376046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:25.404167 1498704 cri.go:89] found id: ""
	I1217 02:07:25.404190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.404199 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:25.404207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:25.404219 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.432830 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:25.432905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:25.491437 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:25.491472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:25.506773 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:25.506811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:25.571857 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:25.563411    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.564290    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566145    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566486    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.567944    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:25.571879 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:25.571891 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
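	Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container was ever created). A minimal manual check from a shell inside the node, assuming curl and ss are available in the node image (this session is illustrative, not part of the test run):

	  # nothing should be listening on the apiserver port if the container never started
	  sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	  # probe the apiserver health endpoint directly; expect connection refused here
	  curl -k https://localhost:8443/livez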
	W1217 02:07:27.634506 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:29.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
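	The interleaved W-lines from process 1494358 belong to the parallel no-preload test, which polls the node's Ready condition against 192.168.76.2:8443 and, on connection refused, retries about every two seconds. A rough equivalent of that poll, assuming kubectl is pointed at the no-preload-178365 profile (sketch only):

	  # wait until the Ready condition reports True, retrying on failure
	  until kubectl get node no-preload-178365 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null | grep -q True; do
	    sleep 2
	  done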
	I1217 02:07:28.097148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:28.109420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:28.109492 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:28.147274 1498704 cri.go:89] found id: ""
	I1217 02:07:28.147301 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.147310 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:28.147317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:28.147375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:28.182487 1498704 cri.go:89] found id: ""
	I1217 02:07:28.182520 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.182529 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:28.182535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:28.182605 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:28.210414 1498704 cri.go:89] found id: ""
	I1217 02:07:28.210492 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.210506 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:28.210513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:28.210596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:28.236032 1498704 cri.go:89] found id: ""
	I1217 02:07:28.236067 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.236076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:28.236100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:28.236187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:28.261848 1498704 cri.go:89] found id: ""
	I1217 02:07:28.261925 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.261949 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:28.261961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:28.262023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:28.287575 1498704 cri.go:89] found id: ""
	I1217 02:07:28.287642 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.287667 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:28.287681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:28.287753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:28.311909 1498704 cri.go:89] found id: ""
	I1217 02:07:28.311942 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.311950 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:28.311974 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:28.312055 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:28.338978 1498704 cri.go:89] found id: ""
	I1217 02:07:28.338999 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.339013 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:28.339041 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:28.339059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:28.395245 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:28.395283 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:28.410155 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:28.410183 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:28.473762 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:28.465176    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.465695    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467313    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467841    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.469624    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:28.473783 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:28.473807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:28.499695 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:28.499728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
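	The blocks above are iterations of minikube's apiserver wait loop: look for a kube-apiserver process with pgrep, list each expected control-plane container with crictl, and, finding none, gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A condensed sketch of the same checks run by hand (component names taken from the log above):

	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    # empty output means the container was never created for this component
	    sudo crictl ps -a --quiet --name="$c"
	  done
	  sudo journalctl -u kubelet -n 400      # kubelet errors usually explain why nothing started
	  sudo journalctl -u containerd -n 400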
	I1217 02:07:31.034443 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:31.045062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:31.045138 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:31.071798 1498704 cri.go:89] found id: ""
	I1217 02:07:31.071825 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.071835 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:31.071842 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:31.071912 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:31.102760 1498704 cri.go:89] found id: ""
	I1217 02:07:31.102787 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.102795 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:31.102802 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:31.102866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:31.141278 1498704 cri.go:89] found id: ""
	I1217 02:07:31.141303 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.141313 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:31.141320 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:31.141385 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:31.171560 1498704 cri.go:89] found id: ""
	I1217 02:07:31.171590 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.171599 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:31.171606 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:31.171671 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:31.198647 1498704 cri.go:89] found id: ""
	I1217 02:07:31.198713 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.198736 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:31.198749 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:31.198822 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:31.223451 1498704 cri.go:89] found id: ""
	I1217 02:07:31.223534 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.223560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:31.223580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:31.223660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:31.253387 1498704 cri.go:89] found id: ""
	I1217 02:07:31.253413 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.253422 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:31.253428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:31.253487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:31.278792 1498704 cri.go:89] found id: ""
	I1217 02:07:31.278815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.278823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:31.278832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:31.278843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:31.303758 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:31.303790 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.332180 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:31.332251 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:31.388186 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:31.388222 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:31.402632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:31.402661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:31.464007 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:31.455376    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456162    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456959    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458412    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458952    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:07:32.134594 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:34.135393 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:33.964236 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:33.974724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:33.974801 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:33.997812 1498704 cri.go:89] found id: ""
	I1217 02:07:33.997833 1498704 logs.go:282] 0 containers: []
	W1217 02:07:33.997841 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:33.997847 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:33.997918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:34.028229 1498704 cri.go:89] found id: ""
	I1217 02:07:34.028256 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.028265 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:34.028273 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:34.028333 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:34.053400 1498704 cri.go:89] found id: ""
	I1217 02:07:34.053426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.053437 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:34.053444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:34.053504 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:34.079351 1498704 cri.go:89] found id: ""
	I1217 02:07:34.079419 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.079433 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:34.079441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:34.079499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:34.106192 1498704 cri.go:89] found id: ""
	I1217 02:07:34.106228 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.106237 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:34.106244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:34.106315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:34.147697 1498704 cri.go:89] found id: ""
	I1217 02:07:34.147759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.147785 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:34.147810 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:34.147890 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:34.176177 1498704 cri.go:89] found id: ""
	I1217 02:07:34.176244 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.176268 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:34.176288 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:34.176365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:34.205945 1498704 cri.go:89] found id: ""
	I1217 02:07:34.206007 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.206035 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:34.206056 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:34.206081 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:34.262276 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:34.262309 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:34.276944 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:34.276971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:34.338908 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:34.331218    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.331638    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333081    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333377    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.334783    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:34.338934 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:34.338947 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:34.363617 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:34.363647 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:36.891296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:36.902860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:36.902927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:36.930707 1498704 cri.go:89] found id: ""
	I1217 02:07:36.930733 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.930747 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:36.930754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:36.930811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:36.955573 1498704 cri.go:89] found id: ""
	I1217 02:07:36.955597 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.955605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:36.955611 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:36.955668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:36.980409 1498704 cri.go:89] found id: ""
	I1217 02:07:36.980434 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.980444 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:36.980450 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:36.980508 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:37.009442 1498704 cri.go:89] found id: ""
	I1217 02:07:37.009467 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.009477 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:37.009484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:37.009551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:37.037149 1498704 cri.go:89] found id: ""
	I1217 02:07:37.037171 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.037180 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:37.037186 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:37.037250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:37.061767 1498704 cri.go:89] found id: ""
	I1217 02:07:37.061792 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.061801 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:37.061818 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:37.061889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:37.085968 1498704 cri.go:89] found id: ""
	I1217 02:07:37.085993 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.086003 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:37.086009 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:37.086074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:37.115273 1498704 cri.go:89] found id: ""
	I1217 02:07:37.115295 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.115303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:37.115312 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:37.115323 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:37.173190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:37.173223 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:37.190802 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:37.190834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:37.258464 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:37.250353    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.250978    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.252515    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.253019    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.254562    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:37.258486 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:37.258498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:37.283631 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:37.283665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:36.635067 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:38.635141 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:40.635215 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:39.816914 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:39.827386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:39.827463 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:39.852104 1498704 cri.go:89] found id: ""
	I1217 02:07:39.852129 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.852139 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:39.852145 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:39.852204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:39.892785 1498704 cri.go:89] found id: ""
	I1217 02:07:39.892806 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.892815 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:39.892822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:39.892887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:39.923500 1498704 cri.go:89] found id: ""
	I1217 02:07:39.923530 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.923538 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:39.923544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:39.923603 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:39.949968 1498704 cri.go:89] found id: ""
	I1217 02:07:39.949995 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.950004 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:39.950010 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:39.950071 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:39.974479 1498704 cri.go:89] found id: ""
	I1217 02:07:39.974500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.974508 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:39.974515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:39.974572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:40.015259 1498704 cri.go:89] found id: ""
	I1217 02:07:40.015286 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.015296 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:40.015303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:40.015375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:40.045029 1498704 cri.go:89] found id: ""
	I1217 02:07:40.045055 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.045064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:40.045071 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:40.045135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:40.072784 1498704 cri.go:89] found id: ""
	I1217 02:07:40.072818 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.072833 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:40.072843 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:40.072860 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:40.153737 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:40.142795    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.144161    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.145378    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.146432    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.147502    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:40.153765 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:40.153780 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:40.189498 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:40.189552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:40.222768 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:40.222844 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:40.279190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:40.279224 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:42.796231 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:42.806670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:42.806738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:42.830230 1498704 cri.go:89] found id: ""
	I1217 02:07:42.830250 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.830258 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:42.830265 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:42.830323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1217 02:07:43.135159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:45.135226 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:42.855478 1498704 cri.go:89] found id: ""
	I1217 02:07:42.855500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.855509 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:42.855515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:42.855580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:42.894494 1498704 cri.go:89] found id: ""
	I1217 02:07:42.894522 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.894530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:42.894536 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:42.894593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:42.921324 1498704 cri.go:89] found id: ""
	I1217 02:07:42.921350 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.921359 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:42.921365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:42.921435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:42.953266 1498704 cri.go:89] found id: ""
	I1217 02:07:42.953290 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.953299 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:42.953305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:42.953366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:42.977816 1498704 cri.go:89] found id: ""
	I1217 02:07:42.977841 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.977850 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:42.977856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:42.977917 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:43.003747 1498704 cri.go:89] found id: ""
	I1217 02:07:43.003839 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.003865 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:43.003880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:43.003963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:43.029772 1498704 cri.go:89] found id: ""
	I1217 02:07:43.029797 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.029806 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:43.029816 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:43.029828 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:43.055443 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:43.055476 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:43.084076 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:43.084104 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:43.145546 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:43.145607 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:43.161920 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:43.161999 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:43.231831 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:43.222961    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.223493    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225230    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225634    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.227364    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:45.733506 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:45.744340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:45.744408 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:45.769934 1498704 cri.go:89] found id: ""
	I1217 02:07:45.769957 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.769965 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:45.769971 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:45.770034 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:45.795238 1498704 cri.go:89] found id: ""
	I1217 02:07:45.795263 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.795272 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:45.795279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:45.795343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:45.821898 1498704 cri.go:89] found id: ""
	I1217 02:07:45.821922 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.821930 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:45.821937 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:45.821999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:45.847109 1498704 cri.go:89] found id: ""
	I1217 02:07:45.847132 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.847140 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:45.847146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:45.847208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:45.880160 1498704 cri.go:89] found id: ""
	I1217 02:07:45.880190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.880199 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:45.880205 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:45.880271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:45.910818 1498704 cri.go:89] found id: ""
	I1217 02:07:45.910850 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.910859 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:45.910866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:45.910927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:45.939378 1498704 cri.go:89] found id: ""
	I1217 02:07:45.939403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.939413 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:45.939419 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:45.939480 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:45.966395 1498704 cri.go:89] found id: ""
	I1217 02:07:45.966421 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.966430 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:45.966440 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:45.966479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:45.981177 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:45.981203 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:46.055154 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:46.045816    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.046563    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.048453    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.049038    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.050565    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:46.055186 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:46.055204 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:46.081781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:46.081822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:46.110247 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:46.110271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:07:47.635175 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:50.134634 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
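
The interleaved W-lines from pid 1494358 belong to a second profile (no-preload-178365) whose apiserver at 192.168.76.2:8443 is equally unreachable; minikube simply retries the node GET until its deadline. The same condition can be read by hand once that apiserver is up — a minimal sketch, assuming the kubectl context carries the profile name, as minikube normally sets it:

    kubectl --context no-preload-178365 get node no-preload-178365 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
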
	I1217 02:07:48.673749 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:48.684117 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:48.684190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:48.710141 1498704 cri.go:89] found id: ""
	I1217 02:07:48.710163 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.710171 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:48.710177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:48.710242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:48.735609 1498704 cri.go:89] found id: ""
	I1217 02:07:48.735631 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.735639 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:48.735648 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:48.735707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:48.760494 1498704 cri.go:89] found id: ""
	I1217 02:07:48.760517 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.760525 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:48.760532 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:48.760592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:48.786553 1498704 cri.go:89] found id: ""
	I1217 02:07:48.786574 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.786582 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:48.786588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:48.786645 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:48.815529 1498704 cri.go:89] found id: ""
	I1217 02:07:48.815551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.815560 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:48.815566 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:48.815623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:48.839528 1498704 cri.go:89] found id: ""
	I1217 02:07:48.839551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.839560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:48.839567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:48.839649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:48.870240 1498704 cri.go:89] found id: ""
	I1217 02:07:48.870266 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.870275 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:48.870282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:48.870363 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:48.906712 1498704 cri.go:89] found id: ""
	I1217 02:07:48.906736 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.906746 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
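
Every probe in this sweep is the same crictl query with a different --name filter, and each returns an empty ID list: no control-plane container exists in any state, not even exited. The whole sweep collapses to one loop when run by hand inside the node — a minimal sketch built from the exact command in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done
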
	I1217 02:07:48.906756 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:48.906786 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:48.934786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:48.934865 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:48.964758 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:48.964785 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:49.022291 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:49.022326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:49.036990 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:49.037025 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:49.101921 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:49.093270    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.093786    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095214    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095625    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.097015    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.602715 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
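
The pgrep flags decode as: -f match against the full command line, -x require the whole line to match the pattern, -n print only the newest match. Exit status 1 simply means no such process, which is what every cycle here finds — a minimal sketch:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
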
	I1217 02:07:51.614088 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:51.614167 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:51.640614 1498704 cri.go:89] found id: ""
	I1217 02:07:51.640639 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.640648 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:51.640655 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:51.640716 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:51.665595 1498704 cri.go:89] found id: ""
	I1217 02:07:51.665622 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.665631 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:51.665637 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:51.665727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:51.690508 1498704 cri.go:89] found id: ""
	I1217 02:07:51.690532 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.690541 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:51.690547 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:51.690627 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:51.717537 1498704 cri.go:89] found id: ""
	I1217 02:07:51.717561 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.717570 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:51.717577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:51.717638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:51.742073 1498704 cri.go:89] found id: ""
	I1217 02:07:51.742095 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.742104 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:51.742110 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:51.742169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:51.768165 1498704 cri.go:89] found id: ""
	I1217 02:07:51.768188 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.768234 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:51.768255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:51.768322 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:51.793095 1498704 cri.go:89] found id: ""
	I1217 02:07:51.793118 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.793127 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:51.793133 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:51.793195 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:51.817679 1498704 cri.go:89] found id: ""
	I1217 02:07:51.817701 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.817710 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:51.817720 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:51.817730 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:51.874453 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:51.874486 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:51.890393 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:51.890418 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:51.966182 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.966201 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:51.966214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:51.992382 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:51.992417 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
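
The container-status command is a small fallback chain: resolve crictl with which (falling back to the bare name so sudo can still find it on its own PATH), and if crictl fails outright, fall back to docker. Standalone, the same chain from the log reads:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
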
	W1217 02:07:52.135139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:54.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:54.525060 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:54.535685 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:54.535760 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:54.563912 1498704 cri.go:89] found id: ""
	I1217 02:07:54.563935 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.563944 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:54.563950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:54.564011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:54.588995 1498704 cri.go:89] found id: ""
	I1217 02:07:54.589020 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.589031 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:54.589038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:54.589101 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:54.615173 1498704 cri.go:89] found id: ""
	I1217 02:07:54.615198 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.615207 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:54.615214 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:54.615277 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:54.640498 1498704 cri.go:89] found id: ""
	I1217 02:07:54.640523 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.640532 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:54.640539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:54.640623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:54.666201 1498704 cri.go:89] found id: ""
	I1217 02:07:54.666226 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.666234 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:54.666241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:54.666303 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:54.690876 1498704 cri.go:89] found id: ""
	I1217 02:07:54.690899 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.690908 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:54.690915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:54.690974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:54.714932 1498704 cri.go:89] found id: ""
	I1217 02:07:54.715000 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.715024 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:54.715043 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:54.715133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:54.739880 1498704 cri.go:89] found id: ""
	I1217 02:07:54.739906 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.739926 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:54.739952 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:54.739978 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:54.804035 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:54.804056 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:54.804070 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:54.829994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:54.830030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.858611 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:54.858639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:54.921120 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:54.921196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
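
The dmesg invocation is worth decoding: -H human-readable timestamps, -P no pager, -L=never no color escapes (so the capture stays clean over ssh), and --level restricts output to warnings and worse. A common culprit when control-plane containers never appear is the OOM killer, which the same ring buffer exposes — a minimal sketch:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo dmesg -PH -L=never | grep -iE 'out of memory|oom|killed process' | tail -n 20
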
	I1217 02:07:57.438546 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.448669 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:57.448736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:57.475324 1498704 cri.go:89] found id: ""
	I1217 02:07:57.475346 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.475355 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:57.475362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:57.475419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:57.505098 1498704 cri.go:89] found id: ""
	I1217 02:07:57.505123 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.505131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:57.505137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:57.505196 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:57.529496 1498704 cri.go:89] found id: ""
	I1217 02:07:57.529519 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.529529 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:57.529535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:57.529601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:57.560154 1498704 cri.go:89] found id: ""
	I1217 02:07:57.560179 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.560188 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:57.560194 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:57.560256 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:57.584872 1498704 cri.go:89] found id: ""
	I1217 02:07:57.584898 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.584912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:57.584919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:57.584976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:57.611897 1498704 cri.go:89] found id: ""
	I1217 02:07:57.611930 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.611938 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:57.611945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:57.612004 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:57.636969 1498704 cri.go:89] found id: ""
	I1217 02:07:57.636991 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.636999 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:57.637006 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:57.637069 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:57.661285 1498704 cri.go:89] found id: ""
	I1217 02:07:57.661312 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.661320 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:57.661329 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:57.661340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:57.717030 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:57.717066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.732556 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:57.732588 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:57.802383 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:57.802403 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:57.802414 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:57.831640 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:57.831729 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:56.634914 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:58.635189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:01.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:00.359786 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.375104 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:00.375194 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:00.418191 1498704 cri.go:89] found id: ""
	I1217 02:08:00.418222 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.418232 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:00.418239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:00.418315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:00.456739 1498704 cri.go:89] found id: ""
	I1217 02:08:00.456766 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.456775 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:00.456782 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:00.456850 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:00.488069 1498704 cri.go:89] found id: ""
	I1217 02:08:00.488097 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.488106 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:00.488115 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:00.488180 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:00.522338 1498704 cri.go:89] found id: ""
	I1217 02:08:00.522369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.522383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:00.522391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:00.522477 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:00.552999 1498704 cri.go:89] found id: ""
	I1217 02:08:00.553026 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.553035 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:00.553041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:00.553105 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:00.579678 1498704 cri.go:89] found id: ""
	I1217 02:08:00.579710 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.579719 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:00.579725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:00.579787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:00.605680 1498704 cri.go:89] found id: ""
	I1217 02:08:00.605708 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.605717 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:00.605724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:00.605787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:00.632147 1498704 cri.go:89] found id: ""
	I1217 02:08:00.632172 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.632181 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:00.632191 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:00.632202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:00.658405 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:00.658442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.687017 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:00.687042 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:00.743960 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:00.743997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:00.758928 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:00.758957 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:00.826075 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:08:03.634990 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:05.635168 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:03.326352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.337106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:03.337176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:03.362079 1498704 cri.go:89] found id: ""
	I1217 02:08:03.362103 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.362112 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:03.362120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:03.362185 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:03.406055 1498704 cri.go:89] found id: ""
	I1217 02:08:03.406078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.406086 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:03.406092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:03.406153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:03.469689 1498704 cri.go:89] found id: ""
	I1217 02:08:03.469719 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.469728 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:03.469734 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:03.469795 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:03.495363 1498704 cri.go:89] found id: ""
	I1217 02:08:03.495388 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.495397 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:03.495403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:03.495462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:03.520987 1498704 cri.go:89] found id: ""
	I1217 02:08:03.521020 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.521029 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:03.521035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:03.521104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:03.546993 1498704 cri.go:89] found id: ""
	I1217 02:08:03.547070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.547086 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:03.547094 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:03.547157 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:03.572356 1498704 cri.go:89] found id: ""
	I1217 02:08:03.572381 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.572390 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:03.572396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:03.572465 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:03.601007 1498704 cri.go:89] found id: ""
	I1217 02:08:03.601039 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.601048 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:03.601058 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:03.601069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:03.626163 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:03.626198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:03.653854 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:03.653882 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:03.711530 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:03.711566 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:03.726308 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:03.726377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:03.794467 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
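
Note that every refusal is against [::1]:8443 rather than a pod-network address: the in-node kubeconfig at /var/lib/minikube/kubeconfig pins the server to localhost, as the Get URLs in the errors confirm. Verifying that takes one line — a minimal sketch, assuming a shell inside the node:

    sudo grep 'server:' /var/lib/minikube/kubeconfig    # expect https://localhost:8443
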
	I1217 02:08:06.296166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.306860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:06.306931 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:06.335081 1498704 cri.go:89] found id: ""
	I1217 02:08:06.335118 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.335128 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:06.335140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:06.335216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:06.360315 1498704 cri.go:89] found id: ""
	I1217 02:08:06.360337 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.360346 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:06.360353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:06.360416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:06.438162 1498704 cri.go:89] found id: ""
	I1217 02:08:06.438184 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.438193 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:06.438201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:06.438260 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:06.473712 1498704 cri.go:89] found id: ""
	I1217 02:08:06.473739 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.473750 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:06.473757 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:06.473821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:06.501185 1498704 cri.go:89] found id: ""
	I1217 02:08:06.501213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.501223 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:06.501229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:06.501291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:06.527618 1498704 cri.go:89] found id: ""
	I1217 02:08:06.527642 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.527650 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:06.527657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:06.527723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:06.551855 1498704 cri.go:89] found id: ""
	I1217 02:08:06.551882 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.551892 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:06.551899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:06.551982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:06.577516 1498704 cri.go:89] found id: ""
	I1217 02:08:06.577547 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.577556 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:06.577566 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:06.577577 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:06.592728 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:06.592762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:06.660537 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:06.660559 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:06.660572 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:06.685272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:06.685307 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:06.716733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:06.716761 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
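
When every crictl sweep comes back empty, the kubelet journal gathered above is usually where the real failure lives (sandbox creation, image pulls, cgroup setup). The 400-line capture can be narrowed by hand — a minimal sketch, assuming a shell inside the node:

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
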
	W1217 02:08:07.635213 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:10.134640 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:09.274376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.285055 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:09.285129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:09.310445 1498704 cri.go:89] found id: ""
	I1217 02:08:09.310468 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.310477 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:09.310483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:09.310551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:09.339399 1498704 cri.go:89] found id: ""
	I1217 02:08:09.339434 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.339443 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:09.339449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:09.339539 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:09.364792 1498704 cri.go:89] found id: ""
	I1217 02:08:09.364830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.364843 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:09.364851 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:09.364921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:09.398786 1498704 cri.go:89] found id: ""
	I1217 02:08:09.398813 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.398822 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:09.398829 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:09.398898 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:09.437605 1498704 cri.go:89] found id: ""
	I1217 02:08:09.437633 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.437670 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:09.437696 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:09.437778 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:09.469389 1498704 cri.go:89] found id: ""
	I1217 02:08:09.469430 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.469439 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:09.469446 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:09.469557 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:09.501822 1498704 cri.go:89] found id: ""
	I1217 02:08:09.501847 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.501856 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:09.501873 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:09.501953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:09.526536 1498704 cri.go:89] found id: ""
	I1217 02:08:09.526604 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.526627 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:09.526649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:09.526685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:09.553800 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:09.553829 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:09.611333 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:09.611367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:09.626057 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:09.626083 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:09.690274 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:09.690296 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:09.690308 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
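	[annotation] Each diagnostic cycle scans for the core control-plane containers one name at a time with crictl; since the kubelet never started them, every query returns an empty ID list. The same scan condensed into a loop (the crictl invocation is copied verbatim from the log; running it assumes crictl is on the node):

	    # The per-component container scan from the cycle above, as one loop.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching \"$name\""
	    done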
	I1217 02:08:12.216656 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.226983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:12.227094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:12.251590 1498704 cri.go:89] found id: ""
	I1217 02:08:12.251613 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.251622 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:12.251628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:12.251686 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:12.276257 1498704 cri.go:89] found id: ""
	I1217 02:08:12.276285 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.276293 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:12.276308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:12.276365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:12.300603 1498704 cri.go:89] found id: ""
	I1217 02:08:12.300628 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.300637 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:12.300643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:12.300704 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:12.328528 1498704 cri.go:89] found id: ""
	I1217 02:08:12.328552 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.328561 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:12.328571 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:12.328629 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:12.353931 1498704 cri.go:89] found id: ""
	I1217 02:08:12.353954 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.353963 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:12.353969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:12.354031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:12.426173 1498704 cri.go:89] found id: ""
	I1217 02:08:12.426238 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.426263 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:12.426283 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:12.426375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:12.463406 1498704 cri.go:89] found id: ""
	I1217 02:08:12.463432 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.463441 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:12.463447 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:12.463511 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:12.491432 1498704 cri.go:89] found id: ""
	I1217 02:08:12.491457 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.491466 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:12.491476 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:12.491487 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:12.549942 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:12.549979 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:12.566124 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:12.566160 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:12.632809 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:12.632878 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:12.632899 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.657969 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:12.658007 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:12.635367 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:14.635409 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
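	[annotation] With no containers found, the harness falls back to host-level logs each cycle. The equivalent commands, copied from the ssh_runner lines above (assumes systemd units named kubelet and containerd on the node):

	    # Host-level log gathering, mirroring the logged commands.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a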
	I1217 02:08:15.189789 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.200614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:15.200684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:15.224844 1498704 cri.go:89] found id: ""
	I1217 02:08:15.224865 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.224874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:15.224880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:15.224939 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:15.253351 1498704 cri.go:89] found id: ""
	I1217 02:08:15.253417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.253441 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:15.253459 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:15.253547 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:15.278140 1498704 cri.go:89] found id: ""
	I1217 02:08:15.278216 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.278238 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:15.278257 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:15.278335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:15.303296 1498704 cri.go:89] found id: ""
	I1217 02:08:15.303325 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.303334 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:15.303340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:15.303399 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:15.332342 1498704 cri.go:89] found id: ""
	I1217 02:08:15.332369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.332379 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:15.332386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:15.332442 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:15.361393 1498704 cri.go:89] found id: ""
	I1217 02:08:15.361417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.361426 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:15.361432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:15.361501 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:15.399309 1498704 cri.go:89] found id: ""
	I1217 02:08:15.399335 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.399343 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:15.399350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:15.399409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:15.441743 1498704 cri.go:89] found id: ""
	I1217 02:08:15.441769 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.441778 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:15.441787 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:15.441799 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:15.508941 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:15.508977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:15.524099 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:15.524127 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:15.595333 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:15.595351 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:15.595367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:15.620921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:15.620958 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:17.135481 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:19.635228 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:18.151199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.162135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:18.162207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:18.190085 1498704 cri.go:89] found id: ""
	I1217 02:08:18.190108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.190116 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:18.190123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:18.190186 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:18.218906 1498704 cri.go:89] found id: ""
	I1217 02:08:18.218930 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.218938 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:18.218944 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:18.219002 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:18.242454 1498704 cri.go:89] found id: ""
	I1217 02:08:18.242476 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.242484 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:18.242490 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:18.242549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:18.267483 1498704 cri.go:89] found id: ""
	I1217 02:08:18.267505 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.267514 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:18.267527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:18.267587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:18.291870 1498704 cri.go:89] found id: ""
	I1217 02:08:18.291894 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.291902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:18.291909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:18.291970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:18.315514 1498704 cri.go:89] found id: ""
	I1217 02:08:18.315543 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.315551 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:18.315558 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:18.315617 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:18.338958 1498704 cri.go:89] found id: ""
	I1217 02:08:18.338980 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.338988 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:18.338995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:18.339052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:18.362300 1498704 cri.go:89] found id: ""
	I1217 02:08:18.362326 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.362339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:18.362349 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:18.362361 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:18.441796 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:18.441881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:18.465294 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:18.465318 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:18.527976 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:18.527999 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:18.528012 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:18.552941 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:18.552971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:21.080554 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.090872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:21.090951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:21.119427 1498704 cri.go:89] found id: ""
	I1217 02:08:21.119451 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.119459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:21.119466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:21.119531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:21.145488 1498704 cri.go:89] found id: ""
	I1217 02:08:21.145509 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.145517 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:21.145524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:21.145589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:21.171795 1498704 cri.go:89] found id: ""
	I1217 02:08:21.171822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.171830 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:21.171837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:21.171897 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:21.200041 1498704 cri.go:89] found id: ""
	I1217 02:08:21.200067 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.200076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:21.200083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:21.200144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:21.224266 1498704 cri.go:89] found id: ""
	I1217 02:08:21.224294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.224302 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:21.224310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:21.224374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:21.249832 1498704 cri.go:89] found id: ""
	I1217 02:08:21.249859 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.249868 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:21.249875 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:21.249934 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:21.276533 1498704 cri.go:89] found id: ""
	I1217 02:08:21.276556 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.276565 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:21.276577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:21.276638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:21.302869 1498704 cri.go:89] found id: ""
	I1217 02:08:21.302898 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.302906 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:21.302920 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:21.302932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:21.359571 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:21.359612 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:21.386971 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:21.387000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:21.481485 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:21.481511 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:21.481523 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:21.510229 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:21.510266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:22.134985 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:24.135180 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:26.135497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:24.042457 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.053742 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:24.053815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:24.079751 1498704 cri.go:89] found id: ""
	I1217 02:08:24.079777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.079793 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:24.079801 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:24.079863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:24.106268 1498704 cri.go:89] found id: ""
	I1217 02:08:24.106294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.106304 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:24.106310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:24.106372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:24.136105 1498704 cri.go:89] found id: ""
	I1217 02:08:24.136127 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.136141 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:24.136147 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:24.136208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:24.162676 1498704 cri.go:89] found id: ""
	I1217 02:08:24.162704 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.162713 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:24.162719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:24.162781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:24.186881 1498704 cri.go:89] found id: ""
	I1217 02:08:24.186909 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.186918 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:24.186924 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:24.186983 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:24.211784 1498704 cri.go:89] found id: ""
	I1217 02:08:24.211807 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.211816 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:24.211823 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:24.211883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:24.239768 1498704 cri.go:89] found id: ""
	I1217 02:08:24.239791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.239799 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:24.239806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:24.239863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:24.267746 1498704 cri.go:89] found id: ""
	I1217 02:08:24.267826 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.267843 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:24.267853 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:24.267864 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:24.292626 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:24.292661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:24.324726 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:24.324756 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:24.386142 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:24.386184 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:24.417577 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:24.417605 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:24.496974 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
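	[annotation] The "describe nodes" probe runs the kubectl binary minikube installs on the node against the node-local kubeconfig, and exits 1 while the apiserver is down. To reproduce it by hand (both paths are taken verbatim from the log; assumes the same binary version is present on the node):

	    # Reproduce the failing "describe nodes" probe from inside the node.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    echo "exit status: $?"   # 1 while the apiserver is unreachable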
	I1217 02:08:26.997267 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.015470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:27.015561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:27.041572 1498704 cri.go:89] found id: ""
	I1217 02:08:27.041593 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.041601 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:27.041608 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:27.041697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:27.067860 1498704 cri.go:89] found id: ""
	I1217 02:08:27.067884 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.067902 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:27.067923 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:27.068020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:27.091698 1498704 cri.go:89] found id: ""
	I1217 02:08:27.091722 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.091737 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:27.091744 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:27.091804 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:27.116923 1498704 cri.go:89] found id: ""
	I1217 02:08:27.116946 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.116954 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:27.116961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:27.117020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:27.142595 1498704 cri.go:89] found id: ""
	I1217 02:08:27.142619 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.142628 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:27.142634 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:27.142693 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:27.167169 1498704 cri.go:89] found id: ""
	I1217 02:08:27.167195 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.167204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:27.167211 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:27.167271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:27.191350 1498704 cri.go:89] found id: ""
	I1217 02:08:27.191376 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.191384 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:27.191391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:27.191451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:27.216388 1498704 cri.go:89] found id: ""
	I1217 02:08:27.216413 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.216422 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:27.216431 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:27.216442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:27.279861 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:27.279884 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:27.279900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:27.304990 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:27.305027 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:27.333926 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:27.333952 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:27.396365 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:27.396403 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:08:28.635158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:30.635316 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
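	[annotation] Each cycle opens with a process-level check for a running kube-apiserver before touching the CRI at all; the pgrep pattern is copied verbatim from the log (-x exact match, -n newest, -f match full command line):

	    # Process-level check that opens each diagnostic cycle above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"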
	I1217 02:08:29.913629 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.924284 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:29.924359 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:29.951846 1498704 cri.go:89] found id: ""
	I1217 02:08:29.951873 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.951882 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:29.951888 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:29.951948 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:29.979680 1498704 cri.go:89] found id: ""
	I1217 02:08:29.979709 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.979718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:29.979724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:29.979783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:30.017361 1498704 cri.go:89] found id: ""
	I1217 02:08:30.017494 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.017508 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:30.017517 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:30.017600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:30.055966 1498704 cri.go:89] found id: ""
	I1217 02:08:30.055994 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.056008 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:30.056015 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:30.056153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:30.086268 1498704 cri.go:89] found id: ""
	I1217 02:08:30.086296 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.086305 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:30.086313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:30.086387 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:30.114436 1498704 cri.go:89] found id: ""
	I1217 02:08:30.114474 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.114485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:30.114493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:30.114563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:30.143104 1498704 cri.go:89] found id: ""
	I1217 02:08:30.143130 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.143140 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:30.143148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:30.143215 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:30.178848 1498704 cri.go:89] found id: ""
	I1217 02:08:30.178912 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.178928 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:30.178939 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:30.178950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:30.235226 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:30.235261 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:30.250400 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:30.250427 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:30.316823 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:30.316843 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:30.316855 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:30.341943 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:30.341985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
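
Each collection cycle above issues the same eight crictl queries, one per expected control-plane workload, and every query returns an empty id list. For manual debugging, the whole probe collapses into one loop run inside the node (a sketch built only from commands already shown in this log, e.g. after "minikube ssh" into the affected profile):

    # Probe the same eight names minikube's log collector checks.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done

An empty result for every name, as here, means containerd holds no record of the control-plane pods, which is why the collector falls back to journalctl, dmesg, and a raw container listing.
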
	W1217 02:08:33.135099 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:35.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:32.880177 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.891005 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:32.891073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:32.918870 1498704 cri.go:89] found id: ""
	I1217 02:08:32.918896 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.918905 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:32.918912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:32.918970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:32.944098 1498704 cri.go:89] found id: ""
	I1217 02:08:32.944123 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.944132 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:32.944137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:32.944197 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:32.968767 1498704 cri.go:89] found id: ""
	I1217 02:08:32.968791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.968801 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:32.968806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:32.968864 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:32.992596 1498704 cri.go:89] found id: ""
	I1217 02:08:32.992624 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.992632 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:32.992638 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:32.992702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:33.018400 1498704 cri.go:89] found id: ""
	I1217 02:08:33.018424 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.018433 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:33.018439 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:33.018497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:33.043622 1498704 cri.go:89] found id: ""
	I1217 02:08:33.043650 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.043660 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:33.043666 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:33.043728 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:33.068595 1498704 cri.go:89] found id: ""
	I1217 02:08:33.068617 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.068627 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:33.068633 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:33.068695 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:33.097084 1498704 cri.go:89] found id: ""
	I1217 02:08:33.097108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.097117 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:33.097126 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:33.097137 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:33.122964 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:33.123001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:33.151132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:33.151159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:33.206768 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:33.206805 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:33.221251 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:33.221330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:33.289516 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
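
The "describe nodes" step keeps failing for the same reason: kubectl runs inside the node with /var/lib/minikube/kubeconfig, which points at localhost:8443, and the dial is refused because no kube-apiserver process exists. A minimal sketch of the same two-step check, using only commands that appear in this log:

    # 1) Is an apiserver process up at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo 'no kube-apiserver process; expect connection refused'
    # 2) Re-run the exact describe-nodes command the collector uses.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Until step 1 finds a process, step 2 keeps exiting with status 1 exactly as captured above.
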
	I1217 02:08:35.789806 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.800262 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:35.800330 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:35.824823 1498704 cri.go:89] found id: ""
	I1217 02:08:35.824844 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.824852 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:35.824859 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:35.824916 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:35.849352 1498704 cri.go:89] found id: ""
	I1217 02:08:35.849379 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.849388 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:35.849395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:35.849455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:35.873025 1498704 cri.go:89] found id: ""
	I1217 02:08:35.873045 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.873054 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:35.873060 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:35.873123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:35.897548 1498704 cri.go:89] found id: ""
	I1217 02:08:35.897572 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.897581 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:35.897586 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:35.897660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:35.927220 1498704 cri.go:89] found id: ""
	I1217 02:08:35.927283 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.927301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:35.927309 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:35.927374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:35.955050 1498704 cri.go:89] found id: ""
	I1217 02:08:35.955075 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.955083 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:35.955089 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:35.955168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:35.979074 1498704 cri.go:89] found id: ""
	I1217 02:08:35.979144 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.979160 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:35.979167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:35.979228 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:36.005502 1498704 cri.go:89] found id: ""
	I1217 02:08:36.005529 1498704 logs.go:282] 0 containers: []
	W1217 02:08:36.005557 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:36.005568 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:36.005582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:36.022508 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:36.022536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:36.088117 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:36.088139 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:36.088152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:36.112883 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:36.112917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:36.142584 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:36.142610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:37.635249 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:40.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:38.698261 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:38.709807 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:38.709880 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:38.734678 1498704 cri.go:89] found id: ""
	I1217 02:08:38.734703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.734712 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:38.734718 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:38.734777 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:38.764118 1498704 cri.go:89] found id: ""
	I1217 02:08:38.764145 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.764154 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:38.764161 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:38.764223 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:38.792269 1498704 cri.go:89] found id: ""
	I1217 02:08:38.792295 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.792305 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:38.792311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:38.792371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:38.817823 1498704 cri.go:89] found id: ""
	I1217 02:08:38.817845 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.817854 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:38.817861 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:38.817921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:38.846444 1498704 cri.go:89] found id: ""
	I1217 02:08:38.846469 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.846478 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:38.846484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:38.846575 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:38.870805 1498704 cri.go:89] found id: ""
	I1217 02:08:38.870830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.870839 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:38.870845 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:38.870909 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:38.902022 1498704 cri.go:89] found id: ""
	I1217 02:08:38.902047 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.902056 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:38.902063 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:38.902127 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:38.925802 1498704 cri.go:89] found id: ""
	I1217 02:08:38.925831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.925851 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:38.925860 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:38.925871 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.991113 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:38.991154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:39.006019 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:39.006049 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:39.074269 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:39.074328 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:39.074342 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:39.099793 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:39.099827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
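
The kubelet and containerd sections of each cycle are plain journalctl reads of the last 400 lines per unit; outside the harness the same data can be pulled directly (a sketch; --no-pager only keeps the output scriptable):

    # Same units and line counts as the collector above.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
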
	I1217 02:08:41.629026 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.643330 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:41.643411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:41.702722 1498704 cri.go:89] found id: ""
	I1217 02:08:41.702743 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.702752 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:41.702758 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:41.702817 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:41.727343 1498704 cri.go:89] found id: ""
	I1217 02:08:41.727368 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.727377 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:41.727383 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:41.727443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:41.752306 1498704 cri.go:89] found id: ""
	I1217 02:08:41.752331 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.752340 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:41.752346 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:41.752409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:41.777003 1498704 cri.go:89] found id: ""
	I1217 02:08:41.777078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.777101 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:41.777121 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:41.777225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:41.801272 1498704 cri.go:89] found id: ""
	I1217 02:08:41.801298 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.801306 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:41.801313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:41.801371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:41.827046 1498704 cri.go:89] found id: ""
	I1217 02:08:41.827070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.827078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:41.827085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:41.827142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:41.855924 1498704 cri.go:89] found id: ""
	I1217 02:08:41.855956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.855965 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:41.855972 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:41.856042 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:41.882797 1498704 cri.go:89] found id: ""
	I1217 02:08:41.882821 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.882830 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:41.882840 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:41.882856 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:41.897281 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:41.897316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:41.963310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:41.963333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:41.963344 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:41.988494 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:41.988529 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:42.019738 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:42.019770 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:42.135661 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:44.635135 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:44.578521 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.589302 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:44.589376 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:44.614651 1498704 cri.go:89] found id: ""
	I1217 02:08:44.614676 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.614685 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:44.614692 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:44.614755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:44.666392 1498704 cri.go:89] found id: ""
	I1217 02:08:44.666414 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.666422 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:44.666429 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:44.666487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:44.722566 1498704 cri.go:89] found id: ""
	I1217 02:08:44.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.722599 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:44.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:44.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:44.747631 1498704 cri.go:89] found id: ""
	I1217 02:08:44.747656 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.747665 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:44.747671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:44.747730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:44.775719 1498704 cri.go:89] found id: ""
	I1217 02:08:44.775756 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.775765 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:44.775773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:44.775846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:44.801032 1498704 cri.go:89] found id: ""
	I1217 02:08:44.801056 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.801066 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:44.801072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:44.801131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:44.827838 1498704 cri.go:89] found id: ""
	I1217 02:08:44.827872 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.827883 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:44.827890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:44.827961 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:44.852948 1498704 cri.go:89] found id: ""
	I1217 02:08:44.852981 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.852990 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:44.853000 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:44.853011 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:44.908280 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:44.908314 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:44.923445 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:44.923538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:44.992600 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:44.992624 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:44.992637 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:45.027924 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:45.027975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:47.587759 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.598591 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:47.598664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:47.660378 1498704 cri.go:89] found id: ""
	I1217 02:08:47.660400 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.660408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:47.660414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:47.660472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:47.708467 1498704 cri.go:89] found id: ""
	I1217 02:08:47.708489 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.708498 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:47.708504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:47.708563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:47.733161 1498704 cri.go:89] found id: ""
	I1217 02:08:47.733183 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.733191 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:47.733198 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:47.733264 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:47.759190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.759213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.759222 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:47.759228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:47.759285 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:47.787579 1498704 cri.go:89] found id: ""
	I1217 02:08:47.787601 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.787610 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:47.787616 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:47.787697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:47.816190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.816215 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.816224 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:47.816231 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:47.816323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:47.843534 1498704 cri.go:89] found id: ""
	I1217 02:08:47.843562 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.843572 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:47.843578 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:47.843643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1217 02:08:47.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:49.634635 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:47.867806 1498704 cri.go:89] found id: ""
	I1217 02:08:47.867831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.867841 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:47.867852 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:47.867870 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:47.926619 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:47.926658 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:47.941706 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:47.941734 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:48.009461 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:48.009539 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:48.009561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:48.035273 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:48.035311 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
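
The "container status" step is a shell fallback: it resolves crictl with which (or leaves the bare name if that fails) and, when the whole crictl invocation errors for any reason, retries with docker. Behaviorally it reduces to this simpler form:

    # crictl first; any failure (missing binary or runtime error) falls
    # back to docker.
    sudo crictl ps -a || sudo docker ps -a
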
	I1217 02:08:50.567421 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.578623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:50.578694 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:50.607374 1498704 cri.go:89] found id: ""
	I1217 02:08:50.607396 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.607405 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.607411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:50.607472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:50.666455 1498704 cri.go:89] found id: ""
	I1217 02:08:50.666484 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.666493 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.666499 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:50.666559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:50.717784 1498704 cri.go:89] found id: ""
	I1217 02:08:50.717822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.717831 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.717838 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:50.717941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:50.748500 1498704 cri.go:89] found id: ""
	I1217 02:08:50.748531 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.748543 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.748550 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:50.748618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:50.774642 1498704 cri.go:89] found id: ""
	I1217 02:08:50.774668 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.774677 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.774683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:50.774742 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:50.803738 1498704 cri.go:89] found id: ""
	I1217 02:08:50.803760 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.803769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.803776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:50.803840 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:50.828145 1498704 cri.go:89] found id: ""
	I1217 02:08:50.828212 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.828238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.828256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:50.828335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:50.853950 1498704 cri.go:89] found id: ""
	I1217 02:08:50.853976 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.853985 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.853995 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.854006 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.910278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.910316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.924980 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.925008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.992234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.992257 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:50.992271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:51.018744 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:51.018778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
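	(The cycle ending above is the probe minikube keeps repeating below: it asks the CRI runtime for each expected control-plane container by name and finds none. A minimal shell sketch of that probe, built only from the commands shown verbatim in this log; the component list mirrors the names probed above:)

	    # Ask the CRI runtime for each expected control-plane container by name.
	    # Empty output for every name is what the log reports as
	    # 'No container was found matching "..."'.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done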
	W1217 02:08:52.134591 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:54.134633 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:53.547953 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.558518 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:53.558593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:53.583100 1498704 cri.go:89] found id: ""
	I1217 02:08:53.583125 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.583134 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.583141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:53.583202 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:53.607925 1498704 cri.go:89] found id: ""
	I1217 02:08:53.607948 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.607956 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.607962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:53.608023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:53.657081 1498704 cri.go:89] found id: ""
	I1217 02:08:53.657104 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.657127 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.657135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:53.657208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:53.704278 1498704 cri.go:89] found id: ""
	I1217 02:08:53.704305 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.704313 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.704321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:53.704381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:53.730823 1498704 cri.go:89] found id: ""
	I1217 02:08:53.730851 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.730860 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.730868 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:53.730928 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:53.757094 1498704 cri.go:89] found id: ""
	I1217 02:08:53.757116 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.757125 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.757132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:53.757192 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:53.786671 1498704 cri.go:89] found id: ""
	I1217 02:08:53.786696 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.786705 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.786711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:53.786768 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:53.810935 1498704 cri.go:89] found id: ""
	I1217 02:08:53.810957 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.810966 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.810975 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.810986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.866107 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.866140 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:53.881003 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.881037 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.945396 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.945419 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:53.945432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:53.973428 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.973469 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:56.504673 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.515738 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:56.515816 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:56.540741 1498704 cri.go:89] found id: ""
	I1217 02:08:56.540765 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.540773 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.540780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:56.540846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:56.565810 1498704 cri.go:89] found id: ""
	I1217 02:08:56.565831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.565840 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.565846 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:56.565907 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:56.596074 1498704 cri.go:89] found id: ""
	I1217 02:08:56.596096 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.596105 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.596112 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:56.596173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:56.636207 1498704 cri.go:89] found id: ""
	I1217 02:08:56.636229 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.636238 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.636244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:56.636304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:56.698720 1498704 cri.go:89] found id: ""
	I1217 02:08:56.698749 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.698758 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.698765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:56.698838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:56.732897 1498704 cri.go:89] found id: ""
	I1217 02:08:56.732918 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.732926 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.732933 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:56.732999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:56.762677 1498704 cri.go:89] found id: ""
	I1217 02:08:56.762703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.762712 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.762719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:56.762779 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:56.788307 1498704 cri.go:89] found id: ""
	I1217 02:08:56.788333 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.788342 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.788352 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.788364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.844513 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.844548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.858936 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.858968 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.925270 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.925293 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:56.925305 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:56.951928 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.951967 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:56.634544 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:58.634782 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:01.135356 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:59.483487 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.494825 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:59.494899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:59.520751 1498704 cri.go:89] found id: ""
	I1217 02:08:59.520777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.520785 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.520792 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:59.520851 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:59.546097 1498704 cri.go:89] found id: ""
	I1217 02:08:59.546122 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.546131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.546138 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:59.546205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:59.571525 1498704 cri.go:89] found id: ""
	I1217 02:08:59.571548 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.571556 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.571562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:59.571635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:59.595916 1498704 cri.go:89] found id: ""
	I1217 02:08:59.595944 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.595952 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.595959 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:59.596021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:59.677470 1498704 cri.go:89] found id: ""
	I1217 02:08:59.677497 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.677506 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.677512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:59.677577 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:59.708285 1498704 cri.go:89] found id: ""
	I1217 02:08:59.708311 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.708320 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.708328 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:59.708388 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:59.735444 1498704 cri.go:89] found id: ""
	I1217 02:08:59.735466 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.735474 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.735481 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:59.735551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:59.758934 1498704 cri.go:89] found id: ""
	I1217 02:08:59.758956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.758964 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.758974 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.758985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.786487 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.786513 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.843688 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.843719 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.858632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.858661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.922844 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:59.922867 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:59.922888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.448942 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.459473 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:02.459570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:02.487463 1498704 cri.go:89] found id: ""
	I1217 02:09:02.487486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.487494 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.487529 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:02.487591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:02.516013 1498704 cri.go:89] found id: ""
	I1217 02:09:02.516038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.516047 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.516053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:02.516118 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:02.541783 1498704 cri.go:89] found id: ""
	I1217 02:09:02.541806 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.541814 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.541820 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:02.541876 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:02.566427 1498704 cri.go:89] found id: ""
	I1217 02:09:02.566450 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.566459 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.566465 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:02.566561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:02.590894 1498704 cri.go:89] found id: ""
	I1217 02:09:02.590917 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.590926 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.590932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:02.590998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:02.614645 1498704 cri.go:89] found id: ""
	I1217 02:09:02.614668 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.614677 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.614683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:02.614747 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:02.656626 1498704 cri.go:89] found id: ""
	I1217 02:09:02.656662 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.656671 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.656681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:02.656751 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:02.702753 1498704 cri.go:89] found id: ""
	I1217 02:09:02.702787 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.702796 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.702806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.702817 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.772243 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.772266 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:02.772278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.797608 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.797893 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:02.829032 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.829057 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:03.634729 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:06.135608 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:02.886939 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.886975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.401718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.412408 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:05.412488 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:05.441786 1498704 cri.go:89] found id: ""
	I1217 02:09:05.441821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.441830 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.441837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:05.441908 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:05.466385 1498704 cri.go:89] found id: ""
	I1217 02:09:05.466408 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.466416 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.466422 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:05.466481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:05.491033 1498704 cri.go:89] found id: ""
	I1217 02:09:05.491057 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.491066 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.491072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:05.491131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:05.515650 1498704 cri.go:89] found id: ""
	I1217 02:09:05.515675 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.515684 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.515691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:05.515753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:05.539973 1498704 cri.go:89] found id: ""
	I1217 02:09:05.539996 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.540004 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.540016 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:05.540077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:05.565317 1498704 cri.go:89] found id: ""
	I1217 02:09:05.565338 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.565347 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.565353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:05.565414 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:05.590136 1498704 cri.go:89] found id: ""
	I1217 02:09:05.590161 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.590169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.590176 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:05.590240 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:05.614696 1498704 cri.go:89] found id: ""
	I1217 02:09:05.614733 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.614742 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.614752 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.614762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.682980 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.683022 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.700674 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.700704 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.777617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.777635 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:05.777670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:05.803121 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.803155 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:08.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:10.635438 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:08.332434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.343036 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:08.343108 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:08.367411 1498704 cri.go:89] found id: ""
	I1217 02:09:08.367434 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.367443 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.367449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:08.367517 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:08.391668 1498704 cri.go:89] found id: ""
	I1217 02:09:08.391695 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.391704 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.391712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:08.391775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:08.415929 1498704 cri.go:89] found id: ""
	I1217 02:09:08.415953 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.415961 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.415968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:08.416050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:08.441685 1498704 cri.go:89] found id: ""
	I1217 02:09:08.441755 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.441779 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.441798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:08.441888 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:08.466687 1498704 cri.go:89] found id: ""
	I1217 02:09:08.466713 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.466722 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.466728 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:08.466808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:08.491044 1498704 cri.go:89] found id: ""
	I1217 02:09:08.491069 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.491078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.491085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:08.491190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:08.517483 1498704 cri.go:89] found id: ""
	I1217 02:09:08.517508 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.517517 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.517524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:08.517593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:08.543991 1498704 cri.go:89] found id: ""
	I1217 02:09:08.544017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.544026 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.544035 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.544053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.608510 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.608567 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.642989 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.643026 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.751212 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.751241 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:08.751254 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:08.779142 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.779180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.312760 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.327627 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:11.327714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:11.352557 1498704 cri.go:89] found id: ""
	I1217 02:09:11.352580 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.352588 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.352595 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:11.352654 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:11.378891 1498704 cri.go:89] found id: ""
	I1217 02:09:11.378913 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.378922 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.378928 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:11.378987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:11.403393 1498704 cri.go:89] found id: ""
	I1217 02:09:11.403416 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.403424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.403430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:11.403489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:11.432435 1498704 cri.go:89] found id: ""
	I1217 02:09:11.432459 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.432472 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.432479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:11.432565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:11.458410 1498704 cri.go:89] found id: ""
	I1217 02:09:11.458436 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.458445 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.458451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:11.458510 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:11.484113 1498704 cri.go:89] found id: ""
	I1217 02:09:11.484140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.484149 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.484156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:11.484216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:11.511088 1498704 cri.go:89] found id: ""
	I1217 02:09:11.511112 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.511121 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.511128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:11.511191 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:11.540295 1498704 cri.go:89] found id: ""
	I1217 02:09:11.540324 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.540333 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.540342 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.540354 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.554828 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.554857 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:11.615811 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:11.615835 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:11.615849 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:11.643999 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.644035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.696705 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.696733 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:13.134531 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:14.634797 1494358 node_ready.go:38] duration metric: took 6m0.000749408s for node "no-preload-178365" to be "Ready" ...
	I1217 02:09:14.638073 1494358 out.go:203] 
	W1217 02:09:14.640977 1494358 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:09:14.641013 1494358 out.go:285] * 
	W1217 02:09:14.643229 1494358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:09:14.646121 1494358 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348124275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348135139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348172948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348191221Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348204899Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348219340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348228637Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348243127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348261737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348290923Z" level=info msg="Connect containerd service"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348584284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.349144971Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.367921231Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368000485Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368028342Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368075579Z" level=info msg="Start recovering state"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409358181Z" level=info msg="Start event monitor"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409558676Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409664105Z" level=info msg="Start streaming server"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409753861Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409976198Z" level=info msg="runtime interface starting up..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410064724Z" level=info msg="starting plugins..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410151470Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:03:12 no-preload-178365 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.416611073Z" level=info msg="containerd successfully booted in 0.090598s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:16.206417    3927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:16.207357    3927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:16.209207    3927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:16.209553    3927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:16.211275    3927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:09:16 up  7:51,  0 user,  load average: 1.22, 0.92, 1.35
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:09:13 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:09:13 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 17 02:09:13 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:13 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:13 no-preload-178365 kubelet[3803]: E1217 02:09:13.916414    3803 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:09:13 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:09:13 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:09:14 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 17 02:09:14 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:14 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:14 no-preload-178365 kubelet[3809]: E1217 02:09:14.750749    3809 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:09:14 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:09:14 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:09:15 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 17 02:09:15 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:15 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:15 no-preload-178365 kubelet[3829]: E1217 02:09:15.446557    3829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:09:15 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:09:15 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:09:16 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 17 02:09:16 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:16 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:09:16 no-preload-178365 kubelet[3920]: E1217 02:09:16.173542    3920 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:09:16 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:09:16 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 2 (368.071125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.86s)
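The kubelet journal above shows the proximate cause of this failure: kubelet v1.35.0-beta.0 fails its own configuration validation because the host is still on cgroup v1, and systemd restarts it in a tight loop (restart counter 481 through 484) while the apiserver on 8443 never comes up. A minimal way to confirm the host's cgroup version, assuming the docker driver and the profile name taken from this report (these commands are illustrative, not something the test suite runs):

	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, matching the validation error
	stat -fc %T /sys/fs/cgroup
	# the same check inside the minikube node container
	docker exec no-preload-178365 stat -fc %T /sys/fs/cgroup

Ubuntu 20.04 boots with cgroup v1 by default, which is consistent with the validation error repeating on every restart.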

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (98.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 02:04:50.423487 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:04:59.951269 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:05:09.433979 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.95693789s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
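All four kubectl apply validation failures above share one cause: nothing is listening on localhost:8443 inside the node, so the addon manifests never reached an apiserver. A quick probe that separates "apiserver down" from a genuine manifest problem, assuming the profile name from this log and that curl is available in the node image:

	# "connection refused" reproduces the failure above; "ok" would indicate a healthy apiserver
	minikube -p newest-cni-456492 ssh -- curl -sk https://localhost:8443/healthz

The --validate=false workaround suggested in the error text would not help here: apply still has to reach the same unreachable apiserver to submit the objects.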
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-456492
helpers_test.go:244: (dbg) docker inspect newest-cni-456492:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	        "Created": "2025-12-17T01:55:16.478266179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1483846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:55:16.541817284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hosts",
	        "LogPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2-json.log",
	        "Name": "/newest-cni-456492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-456492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-456492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	                "LowerDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456492",
	                "Source": "/var/lib/docker/volumes/newest-cni-456492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456492",
	                "name.minikube.sigs.k8s.io": "newest-cni-456492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac9e3ec6660ef534c80ae9a62e4f8293e36270572d36ebc788f7c4f17de733d6",
	            "SandboxKey": "/var/run/docker/netns/ac9e3ec6660e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-456492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:75:ea:0b:2f:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78c732410c8ee8b3c147900aac111eb07f35c057f64efcecb5d20570fed785bc",
	                    "EndpointID": "b72674d5fca307f7a4a283c14f474eea6fa6df5ca3b748d3cb3d1f3fc33098ac",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456492",
	                        "72c4fe7eb784"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
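The docker inspect dump above is captured wholesale for the post-mortem; when triaging a report like this, the few fields that matter can be pulled directly with Go templates instead. A sketch using the container and network names taken from this log:

	docker inspect -f '{{ .State.Status }} restarts={{ .RestartCount }}' newest-cni-456492
	docker inspect -f '{{ (index .NetworkSettings.Networks "newest-cni-456492").IPAddress }}' newest-cni-456492

Here the container is running with zero restarts and holds 192.168.85.2, which points the failure at the node's Kubernetes components rather than at the docker layer.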
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 6 (309.950428ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:05:10.045875 1498191 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
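The non-zero status here is a kubeconfig problem rather than a host problem: the stderr shows that "newest-cni-456492" does not appear in the kubeconfig at all, and the stdout names the fix itself. Assuming the same profile:

	minikube -p newest-cni-456492 update-context
	kubectl config get-contexts   # verify the profile's context entry is back

Note that update-context only repairs the local kubeconfig entry; it does nothing for the apiserver that the earlier failures show is down inside the node.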
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ delete  │ -p old-k8s-version-859530                                                                                                                                                                                                                                  │ old-k8s-version-859530       │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:52 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:52 UTC │ 17 Dec 25 01:53 UTC │
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:03:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:03:06.446138 1494358 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:03:06.446331 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446344 1494358 out.go:374] Setting ErrFile to fd 2...
	I1217 02:03:06.446349 1494358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:03:06.446613 1494358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:03:06.446996 1494358 out.go:368] Setting JSON to false
	I1217 02:03:06.447949 1494358 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27937,"bootTime":1765909050,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:03:06.448026 1494358 start.go:143] virtualization:  
	I1217 02:03:06.451183 1494358 out.go:179] * [no-preload-178365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:03:06.455055 1494358 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:03:06.455196 1494358 notify.go:221] Checking for updates...
	I1217 02:03:06.461067 1494358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:03:06.464077 1494358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:06.467522 1494358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:03:06.470660 1494358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:03:06.473573 1494358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:03:06.476917 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:06.477577 1494358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:03:06.504584 1494358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:03:06.504713 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.568470 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.558714769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.568580 1494358 docker.go:319] overlay module found
	I1217 02:03:06.571663 1494358 out.go:179] * Using the docker driver based on existing profile
	I1217 02:03:06.574409 1494358 start.go:309] selected driver: docker
	I1217 02:03:06.574441 1494358 start.go:927] validating driver "docker" against &{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.574538 1494358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:03:06.575218 1494358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:03:06.633705 1494358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:03:06.62420129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:03:06.634037 1494358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:03:06.634074 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:06.634136 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:06.634181 1494358 start.go:353] cluster config:
	{Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:06.637255 1494358 out.go:179] * Starting "no-preload-178365" primary control-plane node in "no-preload-178365" cluster
	I1217 02:03:06.640178 1494358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:03:06.642991 1494358 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:03:06.645784 1494358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:03:06.645819 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:06.645947 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:06.646262 1494358 cache.go:107] acquiring lock: {Name:mk4890d4b47ae1973de2f5e1f0682feb41ee40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646336 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 02:03:06.646344 1494358 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.402µs
	I1217 02:03:06.646356 1494358 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 02:03:06.646368 1494358 cache.go:107] acquiring lock: {Name:mk966096fd85af29d80d70ba567f975fd1c8ab20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646398 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:03:06.646403 1494358 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.063µs
	I1217 02:03:06.646410 1494358 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646419 1494358 cache.go:107] acquiring lock: {Name:mkf4d095c495df29849f640a0755588b041f7643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646446 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:03:06.646451 1494358 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.19µs
	I1217 02:03:06.646458 1494358 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646468 1494358 cache.go:107] acquiring lock: {Name:mk1c22383e6094d20d836c3a904bbbe609668a02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646495 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:03:06.646500 1494358 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.599µs
	I1217 02:03:06.646506 1494358 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646514 1494358 cache.go:107] acquiring lock: {Name:mkc3683c3186a723f5651545e5f013a6bc8b78e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646539 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1217 02:03:06.646545 1494358 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.074µs
	I1217 02:03:06.646552 1494358 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:03:06.646560 1494358 cache.go:107] acquiring lock: {Name:mk3a7027108fb6cda418f0aea932fdb404491198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646585 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 02:03:06.646589 1494358 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.105µs
	I1217 02:03:06.646596 1494358 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 02:03:06.646606 1494358 cache.go:107] acquiring lock: {Name:mkbcf0cf66af7f52acaeaf88186edd5961eb7fb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646635 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1217 02:03:06.646639 1494358 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 35.028µs
	I1217 02:03:06.646645 1494358 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1217 02:03:06.646653 1494358 cache.go:107] acquiring lock: {Name:mk85e5e85708e9527e64bdd95012aff390add343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.646678 1494358 cache.go:115] /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 02:03:06.646682 1494358 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.031µs
	I1217 02:03:06.646688 1494358 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 02:03:06.646693 1494358 cache.go:87] Successfully saved all images to host disk.
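
Each cache check above amounts to a lock plus a stat of the per-arch tarball path: when the file exists, the save is skipped and only the elapsed time is logged. A minimal Go sketch of that probe, under the assumption that the layout is simply <root>/<repo>_<tag> as the paths in the log suggest (illustrative root and image list, not minikube's actual cache code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePathFor maps an image ref such as "registry.k8s.io/pause:3.10.1"
    // onto the tarball layout seen in the log: <root>/<repo>_<tag>.
    func cachePathFor(root, image string) string {
        return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        root := "/home/jenkins/.minikube/cache/images/arm64" // illustrative root
        for _, img := range []string{
            "registry.k8s.io/pause:3.10.1",
            "registry.k8s.io/etcd:3.6.5-0",
        } {
            p := cachePathFor(root, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Printf("cache image %q exists at %q, skipping save\n", img, p)
                continue
            }
            fmt.Printf("cache miss for %q, would save tarball to %q\n", img, p)
        }
    }
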
	I1217 02:03:06.665484 1494358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:03:06.665506 1494358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:03:06.665526 1494358 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:03:06.665557 1494358 start.go:360] acquireMachinesLock for no-preload-178365: {Name:mkd4a1763d090ac24f95097d34ac035f597ec2f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:03:06.665618 1494358 start.go:364] duration metric: took 39.672µs to acquireMachinesLock for "no-preload-178365"
	I1217 02:03:06.665659 1494358 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:03:06.665665 1494358 fix.go:54] fixHost starting: 
	I1217 02:03:06.665948 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.681763 1494358 fix.go:112] recreateIfNeeded on no-preload-178365: state=Stopped err=<nil>
	W1217 02:03:06.681790 1494358 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:03:06.685089 1494358 out.go:252] * Restarting existing docker container for "no-preload-178365" ...
	I1217 02:03:06.685169 1494358 cli_runner.go:164] Run: docker start no-preload-178365
	I1217 02:03:06.958594 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:06.983526 1494358 kic.go:430] container "no-preload-178365" state is running.
	I1217 02:03:06.983925 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
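
The IP lookups above use `docker container inspect -f` with a Go template over .NetworkSettings.Networks. A small sketch of the same query driven from Go via os/exec (assumes a local docker CLI; the container name is taken from this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIP returns "ipv4,ipv6" for each network, matching the
    // inspect format string used in the log lines above.
    func containerIP(name string) (string, error) {
        format := `{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := containerIP("no-preload-178365")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("addresses:", ip)
    }
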
	I1217 02:03:07.006615 1494358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/config.json ...
	I1217 02:03:07.006877 1494358 machine.go:94] provisionDockerMachine start ...
	I1217 02:03:07.006940 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:07.027938 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:07.028270 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:07.028285 1494358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:03:07.028921 1494358 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42386->127.0.0.1:34254: read: connection reset by peer
	I1217 02:03:10.169609 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.169636 1494358 ubuntu.go:182] provisioning hostname "no-preload-178365"
	I1217 02:03:10.169740 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.194161 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.194504 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.194521 1494358 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-178365 && echo "no-preload-178365" | sudo tee /etc/hostname
	I1217 02:03:10.335145 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178365
	
	I1217 02:03:10.335254 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.353261 1494358 main.go:143] libmachine: Using SSH client type: native
	I1217 02:03:10.353619 1494358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1217 02:03:10.353703 1494358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178365/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:03:10.485869 1494358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
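
The shell fragment above is an idempotent /etc/hosts patch: do nothing if the hostname already appears, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new one. A rough Go equivalent of that decision logic, as a sketch that only prints the patched contents rather than writing the file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostname mirrors the shell above: leave the file alone if the
    // hostname is already mapped, rewrite an existing 127.0.1.1 line, else append.
    func ensureHostname(contents, hostname string) string {
        lines := strings.Split(contents, "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
                return contents // hostname already present, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
                return strings.Join(lines, "\n")
            }
        }
        return contents + "127.0.1.1 " + hostname + "\n" // no entry at all: append
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHostname(string(data), "no-preload-178365"))
    }
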
	I1217 02:03:10.485894 1494358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:03:10.485923 1494358 ubuntu.go:190] setting up certificates
	I1217 02:03:10.485939 1494358 provision.go:84] configureAuth start
	I1217 02:03:10.485997 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:10.502661 1494358 provision.go:143] copyHostCerts
	I1217 02:03:10.502746 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:03:10.502761 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:03:10.502842 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:03:10.502943 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:03:10.502955 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:03:10.502981 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:03:10.503037 1494358 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:03:10.503046 1494358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:03:10.503070 1494358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:03:10.503118 1494358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.no-preload-178365 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-178365]
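
configureAuth above issues a server certificate whose SANs cover every address the machine answers on (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-178365), signed by the profile CA. A self-contained Go sketch of issuing such a SAN-bearing cert from a throwaway in-process CA, reusing the 26280h lifetime from the config dump (illustrative only, not minikube's provision code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        lifetime := 26280 * time.Hour // CertExpiration from the config dump

        // Throwaway CA standing in for minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(lifetime),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SAN set from the provision line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-178365"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(lifetime),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-178365"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
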
	I1217 02:03:10.769670 1494358 provision.go:177] copyRemoteCerts
	I1217 02:03:10.769739 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:03:10.769777 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.789688 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:10.886311 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:03:10.907152 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:03:10.927302 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:03:10.945784 1494358 provision.go:87] duration metric: took 459.830227ms to configureAuth
	I1217 02:03:10.945813 1494358 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:03:10.946051 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:10.946065 1494358 machine.go:97] duration metric: took 3.939178962s to provisionDockerMachine
	I1217 02:03:10.946075 1494358 start.go:293] postStartSetup for "no-preload-178365" (driver="docker")
	I1217 02:03:10.946086 1494358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:03:10.946141 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:03:10.946189 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:10.963795 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.062181 1494358 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:03:11.066171 1494358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:03:11.066203 1494358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:03:11.066214 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:03:11.066271 1494358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:03:11.066354 1494358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:03:11.066460 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:03:11.074455 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:11.096806 1494358 start.go:296] duration metric: took 150.715868ms for postStartSetup
	I1217 02:03:11.096935 1494358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:03:11.096985 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.115904 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.210914 1494358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
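
The two df invocations above read used-percent and free gigabytes for /var over SSH. The same numbers can be pulled without a shell via Statfs; a sketch (Linux-only, matching this test host, and the used% is approximate since real df accounts for root-reserved blocks):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize) // filesystem size in bytes
        avail := st.Bavail * uint64(st.Bsize) // bytes available to non-root
        fmt.Printf("/var: ~%d%% used, %dG available\n", 100-(100*avail)/total, avail>>30)
    }
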
	I1217 02:03:11.216447 1494358 fix.go:56] duration metric: took 4.550774061s for fixHost
	I1217 02:03:11.216474 1494358 start.go:83] releasing machines lock for "no-preload-178365", held for 4.550845758s
	I1217 02:03:11.216552 1494358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178365
	I1217 02:03:11.234013 1494358 ssh_runner.go:195] Run: cat /version.json
	I1217 02:03:11.234074 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.234105 1494358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:03:11.234160 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:11.254634 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.261745 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:11.349529 1494358 ssh_runner.go:195] Run: systemctl --version
	I1217 02:03:11.444567 1494358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:03:11.448907 1494358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:03:11.448999 1494358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:03:11.456651 1494358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:03:11.456676 1494358 start.go:496] detecting cgroup driver to use...
	I1217 02:03:11.456715 1494358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:03:11.456766 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:03:11.474180 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:03:11.487871 1494358 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:03:11.487945 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:03:11.503199 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:03:11.516179 1494358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:03:11.649581 1494358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:03:11.774192 1494358 docker.go:234] disabling docker service ...
	I1217 02:03:11.774263 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:03:11.789517 1494358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:03:11.802804 1494358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:03:11.921518 1494358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:03:12.041333 1494358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:03:12.054806 1494358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:03:12.068814 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:03:12.078910 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:03:12.088243 1494358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:03:12.088356 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:03:12.097152 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.106832 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:03:12.116858 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:03:12.126506 1494358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:03:12.134817 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:03:12.143713 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:03:12.152423 1494358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:03:12.161395 1494358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:03:12.169023 1494358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:03:12.176758 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.290497 1494358 ssh_runner.go:195] Run: sudo systemctl restart containerd
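
The block of sed edits above rewrites /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false to match the "cgroupfs" driver detected on the host. The same rewrite expressed in Go, as a sketch mirroring the sed expression with a multiline regexp:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\"]\n  SystemdCgroup = true\n"
        // Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }

After this family of edits the log reloads systemd units and restarts containerd so the patched config takes effect, which is why the daemon-reload/restart pair follows immediately.
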
	I1217 02:03:12.413211 1494358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:03:12.413339 1494358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:03:12.419446 1494358 start.go:564] Will wait 60s for crictl version
	I1217 02:03:12.419560 1494358 ssh_runner.go:195] Run: which crictl
	I1217 02:03:12.423782 1494358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:03:12.453204 1494358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
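
"Will wait 60s for socket path" above is a bounded wait on /run/containerd/containerd.sock (the log checks it with stat). One way to implement such a wait is to dial the unix socket until it accepts or the deadline passes; a minimal sketch with illustrative interval and timeout:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket polls a unix socket until a connection succeeds
    // or the overall deadline expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
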
	I1217 02:03:12.453355 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.477890 1494358 ssh_runner.go:195] Run: containerd --version
	I1217 02:03:12.502488 1494358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:03:12.505409 1494358 cli_runner.go:164] Run: docker network inspect no-preload-178365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:03:12.525803 1494358 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 02:03:12.529636 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:03:12.539141 1494358 kubeadm.go:884] updating cluster {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:03:12.539268 1494358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:03:12.539323 1494358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:03:12.567893 1494358 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:03:12.567915 1494358 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:03:12.567927 1494358 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:03:12.568032 1494358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-178365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:03:12.568100 1494358 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:03:12.593237 1494358 cni.go:84] Creating CNI manager for ""
	I1217 02:03:12.593259 1494358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:03:12.593281 1494358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:03:12.593303 1494358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178365 NodeName:no-preload-178365 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:03:12.593419 1494358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-178365"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:03:12.593487 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:03:12.601250 1494358 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:03:12.601320 1494358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:03:12.608723 1494358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:03:12.621096 1494358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:03:12.634046 1494358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1217 02:03:12.646740 1494358 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:03:12.650274 1494358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:03:12.660396 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:12.777431 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:12.794901 1494358 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365 for IP: 192.168.76.2
	I1217 02:03:12.794977 1494358 certs.go:195] generating shared ca certs ...
	I1217 02:03:12.795010 1494358 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:12.795186 1494358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:03:12.795275 1494358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:03:12.795305 1494358 certs.go:257] generating profile certs ...
	I1217 02:03:12.795455 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/client.key
	I1217 02:03:12.795549 1494358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key.2535d4d2
	I1217 02:03:12.795620 1494358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key
	I1217 02:03:12.795764 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:03:12.795825 1494358 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:03:12.795852 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:03:12.795904 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:03:12.795962 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:03:12.796010 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:03:12.796087 1494358 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:03:12.796737 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:03:12.814980 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:03:12.832753 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:03:12.850216 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:03:12.868173 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:03:12.886289 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 02:03:12.903326 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:03:12.920371 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/no-preload-178365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:03:12.940578 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:03:12.957601 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:03:12.974697 1494358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:03:12.991288 1494358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:03:13.004811 1494358 ssh_runner.go:195] Run: openssl version
	I1217 02:03:13.011807 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.019338 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:03:13.027129 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030736 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.030806 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:03:13.071860 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:03:13.079209 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.086171 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:03:13.093446 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.097994 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.098062 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:03:13.140311 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:03:13.148478 1494358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.156400 1494358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:03:13.164489 1494358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168307 1494358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.168376 1494358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:03:13.213768 1494358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
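
The `openssl x509 -hash` calls above compute the subject-hash names (51391683.0, 3ec20f2e.0, b5213941.0) under which /etc/ssl/certs symlinks are keyed. A sketch that shells out the same way to derive the link name (assumes openssl in PATH, exactly as the remote host here does):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash returns the hash openssl uses to name CA symlinks
    // in /etc/ssl/certs, e.g. "b5213941" -> link "b5213941.0".
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", h)
    }
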
	I1217 02:03:13.221877 1494358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:03:13.225450 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:03:13.267131 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:03:13.308825 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:03:13.351204 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:03:13.393248 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:03:13.434439 1494358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
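
Each `-checkend 86400` call above asks whether a cert expires within the next 24 hours; openssl exits non-zero when it would, which is the signal to regenerate. The equivalent check in pure Go, parsing the PEM and comparing NotAfter (path taken from the log; returns true when renewal would be needed):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at pemPath
    // expires within the next duration d.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
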
	I1217 02:03:13.475429 1494358 kubeadm.go:401] StartCluster: {Name:no-preload-178365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-178365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:03:13.475532 1494358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:03:13.475608 1494358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:03:13.504535 1494358 cri.go:89] found id: ""
	I1217 02:03:13.504615 1494358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:03:13.512496 1494358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:03:13.512516 1494358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:03:13.512598 1494358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:03:13.520493 1494358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:03:13.520944 1494358 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-178365" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.521050 1494358 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-178365" cluster setting kubeconfig missing "no-preload-178365" context setting]
	I1217 02:03:13.521320 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
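
The repair above adds the missing cluster and context stanzas for "no-preload-178365" to the kubeconfig before rewriting the file under a lock. A sketch of that kind of update using client-go's clientcmd package (assumes a k8s.io/client-go dependency in go.mod; the server URL and path are illustrative):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/.kube/config" // illustrative kubeconfig path
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "no-preload-178365"
        if _, ok := cfg.Clusters[name]; !ok {
            c := api.NewCluster() // add the missing cluster stanza
            c.Server = "https://192.168.76.2:8443"
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := api.NewContext() // add the missing context stanza
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
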
	I1217 02:03:13.522699 1494358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:03:13.530620 1494358 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 02:03:13.530655 1494358 kubeadm.go:602] duration metric: took 18.132356ms to restartPrimaryControlPlane
	I1217 02:03:13.530665 1494358 kubeadm.go:403] duration metric: took 55.248466ms to StartCluster
	I1217 02:03:13.530680 1494358 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.530739 1494358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:03:13.531369 1494358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:03:13.531580 1494358 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:03:13.531879 1494358 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:03:13.531927 1494358 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:03:13.531992 1494358 addons.go:70] Setting storage-provisioner=true in profile "no-preload-178365"
	I1217 02:03:13.532007 1494358 addons.go:239] Setting addon storage-provisioner=true in "no-preload-178365"
	I1217 02:03:13.532031 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.532492 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.532869 1494358 addons.go:70] Setting dashboard=true in profile "no-preload-178365"
	I1217 02:03:13.532892 1494358 addons.go:239] Setting addon dashboard=true in "no-preload-178365"
	W1217 02:03:13.532899 1494358 addons.go:248] addon dashboard should already be in state true
	I1217 02:03:13.532921 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.533338 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.534314 1494358 addons.go:70] Setting default-storageclass=true in profile "no-preload-178365"
	I1217 02:03:13.534373 1494358 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-178365"
	I1217 02:03:13.534686 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.538786 1494358 out.go:179] * Verifying Kubernetes components...
	I1217 02:03:13.541864 1494358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:03:13.565785 1494358 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:03:13.568681 1494358 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.568703 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:03:13.568768 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.578846 1494358 addons.go:239] Setting addon default-storageclass=true in "no-preload-178365"
	I1217 02:03:13.578886 1494358 host.go:66] Checking if "no-preload-178365" exists ...
	I1217 02:03:13.579340 1494358 cli_runner.go:164] Run: docker container inspect no-preload-178365 --format={{.State.Status}}
	I1217 02:03:13.579557 1494358 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:03:13.582535 1494358 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:03:13.585382 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:03:13.585433 1494358 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:03:13.585542 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.608796 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.639282 1494358 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.639307 1494358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:03:13.639371 1494358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178365
	I1217 02:03:13.653415 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.673307 1494358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/no-preload-178365/id_rsa Username:docker}
	I1217 02:03:13.775641 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:13.801572 1494358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:03:13.824171 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:03:13.824193 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:03:13.841637 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:03:13.841671 1494358 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:03:13.855261 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:03:13.855283 1494358 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:03:13.874375 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:03:13.874398 1494358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1217 02:03:13.875947 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:13.875994 1494358 retry.go:31] will retry after 288.181294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
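
The retry lines above show the pattern used for every addon apply while the apiserver is still coming back up: run kubectl apply, and on failure sleep a short randomized interval before trying again. A compact sketch of that loop (assumes kubectl in PATH; the delays are illustrative jitter, not minikube's exact backoff schedule):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry reapplies a manifest a few times, sleeping a short
    // jittered delay between attempts, mirroring the retry.go lines above.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
                return nil
            }
            delay := time.Duration(100+rand.Intn(400)) * time.Millisecond
            fmt.Printf("apply failed (%v), retrying in %s\n", err, delay)
            time.Sleep(delay)
        }
        return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3)
    }
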
	I1217 02:03:13.892373 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:13.907211 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:03:13.907237 1494358 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:03:13.935844 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:03:13.935871 1494358 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:03:13.961470 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:03:13.961495 1494358 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:03:13.976000 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:03:13.976025 1494358 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:03:13.992266 1494358 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:13.992291 1494358 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:03:14.009756 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:03:14.164994 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:14.633552 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633588 1494358 retry.go:31] will retry after 357.626005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633797 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633814 1494358 retry.go:31] will retry after 154.442663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:14.633867 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633885 1494358 retry.go:31] will retry after 536.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.633975 1494358 node_ready.go:35] waiting up to 6m0s for node "no-preload-178365" to be "Ready" ...
	I1217 02:03:14.788822 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:14.850646 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.850682 1494358 retry.go:31] will retry after 194.97222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:14.992099 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:03:15.046507 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.089856 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.089896 1494358 retry.go:31] will retry after 200.825401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:15.123044 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.123092 1494358 retry.go:31] will retry after 471.273084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.171850 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:15.233255 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.233288 1494358 retry.go:31] will retry after 740.372196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.291633 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:15.354957 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.354993 1494358 retry.go:31] will retry after 685.879549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
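
The retry.go:31 lines show each kubectl apply being re-run after a short, randomized delay that roughly grows as failures accumulate (357ms, 537ms, 686ms, 740ms, trending toward a couple of seconds). A sketch of a comparable jittered-backoff loop — the base delay and attempt cap here are assumptions, not minikube's real retry parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs apply with a jittered, roughly doubling delay between
	// attempts, in the spirit of the retry.go:31 lines above.
	func retryApply(apply func() error, attempts int) error {
		delay := 200 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // e.g. "will retry after 357.626005ms"
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryApply(func() error {
			return errors.New("connect: connection refused") // stands in for the failing kubectl apply
		}, 4)
		fmt.Println("gave up:", err)
	}
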
	I1217 02:03:15.595477 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:15.661175 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.661206 1494358 retry.go:31] will retry after 918.180528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:15.974527 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.041010 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:16.041109 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.041153 1494358 retry.go:31] will retry after 922.351729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:16.101618 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.101745 1494358 retry.go:31] will retry after 895.690357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.580236 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:16.635003 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:16.644295 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.644331 1494358 retry.go:31] will retry after 1.757458355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:16.963859 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:03:16.998199 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.029017 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.029053 1494358 retry.go:31] will retry after 1.200975191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:17.065693 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.065740 1494358 retry.go:31] will retry after 733.467842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.799468 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:17.857813 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:17.857844 1494358 retry.go:31] will retry after 1.598089082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.230995 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:18.288826 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.288856 1494358 retry.go:31] will retry after 1.072359143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.402269 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:18.499311 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:18.499346 1494358 retry.go:31] will retry after 1.974986181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:19.135143 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:19.361610 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:19.424580 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.424614 1494358 retry.go:31] will retry after 2.619930526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.456891 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:19.529540 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:19.529572 1494358 retry.go:31] will retry after 4.103816404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:20.475130 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:20.538062 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:20.538103 1494358 retry.go:31] will retry after 4.176264138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:21.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:22.045549 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:22.113264 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:22.113297 1494358 retry.go:31] will retry after 6.243728004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.634510 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:23.724320 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:23.724355 1494358 retry.go:31] will retry after 2.344494398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:24.135189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:24.715564 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:24.778897 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:24.778930 1494358 retry.go:31] will retry after 6.21195427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.069135 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:26.129417 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:26.129453 1494358 retry.go:31] will retry after 7.88915894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
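
The retry.go:31 lines above show the pattern minikube applies to every addon manifest while the apiserver on localhost:8443 is still refusing connections: re-run `kubectl apply` after a growing, slightly randomized delay. A minimal bash sketch of that retry-with-backoff loop (an illustration only, not minikube's actual retry.go; the attempt cap and growth factor here are assumptions):

	#!/usr/bin/env bash
	# Illustration only (not minikube's retry.go): re-run `kubectl apply`
	# with a growing, jittered delay until the apiserver answers.
	manifest=/etc/kubernetes/addons/storageclass.yaml   # same file the log retries
	delay=2
	for attempt in 1 2 3 4 5; do
	    if sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f "$manifest"; then
	        exit 0
	    fi
	    sleep_for=$(( delay + RANDOM % delay ))   # jitter so parallel appliers spread out
	    echo "apply failed, will retry after ${sleep_for}s" >&2
	    sleep "$sleep_for"
	    delay=$(( delay * 3 / 2 ))                # grow the base delay between attempts
	done
	exit 1

Every attempt in this run fails the same way because the validation step needs the apiserver's OpenAPI endpoint, and nothing is listening on port 8443.
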
	I1217 02:03:30.712264 1483412 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000165s
	I1217 02:03:30.712292 1483412 kubeadm.go:319] 
	I1217 02:03:30.712354 1483412 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:03:30.712387 1483412 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:03:30.712502 1483412 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:03:30.712534 1483412 kubeadm.go:319] 
	I1217 02:03:30.712837 1483412 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:03:30.712881 1483412 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:03:30.712921 1483412 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:03:30.712927 1483412 kubeadm.go:319] 
	I1217 02:03:30.716667 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:03:30.717232 1483412 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:03:30.717376 1483412 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:03:30.717666 1483412 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:03:30.717678 1483412 kubeadm.go:319] 
	I1217 02:03:30.717747 1483412 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
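
The check kubeadm describes here is a plain HTTP GET against the kubelet's local healthz endpoint; "connection refused" on 127.0.0.1:10248 means no kubelet process is listening at all, not that it answered unhealthy. The equivalent probe, taken from the log's own wording:

	# The same check kubeadm waited four minutes on. "connection refused"
	# means nothing is listening on 127.0.0.1:10248 - the kubelet never came up.
	curl -sSL http://127.0.0.1:10248/healthz && echo " <- kubelet healthy"
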
	I1217 02:03:30.717804 1483412 kubeadm.go:403] duration metric: took 8m6.069034531s to StartCluster
	I1217 02:03:30.717842 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:03:30.717911 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:03:30.742283 1483412 cri.go:89] found id: ""
	I1217 02:03:30.742310 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.742319 1483412 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:03:30.742326 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:03:30.742390 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:03:30.768192 1483412 cri.go:89] found id: ""
	I1217 02:03:30.768214 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.768223 1483412 logs.go:284] No container was found matching "etcd"
	I1217 02:03:30.768229 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:03:30.768289 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:03:30.792035 1483412 cri.go:89] found id: ""
	I1217 02:03:30.792057 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.792065 1483412 logs.go:284] No container was found matching "coredns"
	I1217 02:03:30.792071 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:03:30.792131 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:03:30.816804 1483412 cri.go:89] found id: ""
	I1217 02:03:30.816825 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.816833 1483412 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:03:30.816840 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:03:30.816896 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:03:30.841903 1483412 cri.go:89] found id: ""
	I1217 02:03:30.841925 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.841934 1483412 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:03:30.841940 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:03:30.841996 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:03:30.865941 1483412 cri.go:89] found id: ""
	I1217 02:03:30.866019 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.866042 1483412 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:03:30.866062 1483412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:03:30.866154 1483412 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:03:30.890127 1483412 cri.go:89] found id: ""
	I1217 02:03:30.890151 1483412 logs.go:282] 0 containers: []
	W1217 02:03:30.890160 1483412 logs.go:284] No container was found matching "kindnet"
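
These cri.go/logs.go lines are a post-mortem sweep: for each control-plane component, minikube asks the CRI runtime whether a container was ever created, and every query comes back empty because the kubelet never launched the static pods. A compact sketch of that sweep (illustrative only; the component list and the crictl query are taken verbatim from the log above):

	# Empty output for every name, as in this run, means the kubelet
	# never launched any of the static pods.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	    if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
	        echo "no container was found matching \"$name\""
	    fi
	done
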
	I1217 02:03:30.890169 1483412 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:03:30.890180 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:03:30.956000 1483412 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:03:30.947697    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.948483    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950040    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.950553    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:30.952095    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:03:30.956023 1483412 logs.go:123] Gathering logs for containerd ...
	I1217 02:03:30.956037 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:03:30.994676 1483412 logs.go:123] Gathering logs for container status ...
	I1217 02:03:30.994742 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:03:31.049766 1483412 logs.go:123] Gathering logs for kubelet ...
	I1217 02:03:31.049839 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:03:31.115613 1483412 logs.go:123] Gathering logs for dmesg ...
	I1217 02:03:31.115651 1483412 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:03:31.155185 1483412 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:03:31.155235 1483412 out.go:285] * 
	W1217 02:03:31.155286 1483412 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.155303 1483412 out.go:285] * 
	W1217 02:03:31.157437 1483412 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:03:31.162495 1483412 out.go:203] 
	W1217 02:03:31.166505 1483412 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:03:31.166566 1483412 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:03:31.166589 1483412 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:03:31.169784 1483412 out.go:203] 
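
For process 1483412 the trail is consistent: the four-minute kubelet health check timed out and no control-plane container was ever created, so the actionable hints are the preflight warnings above, notably the cgroups v1 deprecation notice stating that kubelet v1.35 or newer requires 'FailCgroupV1' set to 'false' on a cgroup v1 host such as this 5.15.0-1084-aws kernel. Collected from the log itself, the suggested follow-ups:

	# Next steps the report itself suggests for this node:
	systemctl status kubelet     # is the unit running or crash-looping?
	journalctl -xeu kubelet      # why it exited; look for cgroup errors
	# And on retry, minikube's own suggestion from the log:
	minikube start --extra-config=kubelet.cgroup-driver=systemd
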
	W1217 02:03:26.635049 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:28.357601 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:28.414285 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:28.414314 1494358 retry.go:31] will retry after 8.141385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:29.135171 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:30.991983 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:31.086857 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:31.086888 1494358 retry.go:31] will retry after 8.346677944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:31.634434 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:33.635098 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
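The node_ready poller is hitting the same refused connection on 192.168.76.2:8443. The equivalent manual check of the node's Ready condition, using the node name from the log (a sketch; the jsonpath expression is an illustrative way to extract the condition, not something the test runs):

	# Read the Ready condition for the node named in the log above.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get node no-preload-178365 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
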
	I1217 02:03:34.019715 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:34.102746 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:34.102779 1494358 retry.go:31] will retry after 12.223918915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:35.635237 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:36.555986 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:36.613803 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:36.613840 1494358 retry.go:31] will retry after 13.520296046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:37.635411 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:39.434738 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:39.504274 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:39.504305 1494358 retry.go:31] will retry after 11.467503434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:40.134513 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:42.134733 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:44.634504 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:46.326949 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:46.388052 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:46.388081 1494358 retry.go:31] will retry after 12.584899893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:46.634980 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:49.135190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:50.134912 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:03:50.199930 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:50.199964 1494358 retry.go:31] will retry after 18.31448087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:50.972298 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:03:51.035555 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:51.035596 1494358 retry.go:31] will retry after 17.961716988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:03:51.635239 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:54.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:56.135361 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:03:58.635047 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:03:58.973235 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:03:59.036686 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:03:59.036719 1494358 retry.go:31] will retry after 12.655603579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:00.635471 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:03.135287 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:05.135412 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:07.635430 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:08.515014 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:04:08.573240 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:08.573272 1494358 retry.go:31] will retry after 21.601228237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:08.998393 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:09.061840 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:09.061874 1494358 retry.go:31] will retry after 17.025396452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:09.635497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:11.692476 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:04:11.748218 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:11.748251 1494358 retry.go:31] will retry after 27.44869176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:12.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:14.635195 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:17.135236 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:19.635297 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:22.135202 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:24.135479 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:26.088208 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:26.153388 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:04:26.153423 1494358 retry.go:31] will retry after 19.325825262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:26.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:29.135233 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:30.175522 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:04:30.235853 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:30.235960 1494358 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
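At this point the retry budget for the storage-provisioner apply is exhausted and minikube downgrades the failure to the one-time warning above. Once the apiserver is reachable again, the addon can be re-enabled out of band (a sketch; the profile name is taken from the log):

	# Manual follow-up after the control plane recovers, not part of the test:
	minikube addons enable storage-provisioner -p no-preload-178365
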
	W1217 02:04:31.635004 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:34.134551 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:36.135469 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:38.635140 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:39.197500 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:04:39.262887 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:39.262988 1494358 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 02:04:40.635609 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:43.134500 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:45.135676 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:04:45.480145 1494358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:04:45.552840 1494358 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:04:45.552964 1494358 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:04:45.556171 1494358 out.go:179] * Enabled addons: 
	I1217 02:04:45.558951 1494358 addons.go:530] duration metric: took 1m32.027017156s for enable addons: enabled=[]
	W1217 02:04:47.635184 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:49.635373 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:52.135104 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:54.634638 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:57.134475 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:04:59.135391 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:01.635489 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:04.135231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682286175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682307123Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682371829Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682402894Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682419543Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682442197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682466124Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682483519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682502038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682554362Z" level=info msg="Connect containerd service"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.682995064Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.683809119Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.695949932Z" level=info msg="Start subscribing containerd event"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696048682Z" level=info msg="Start recovering state"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696295371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.696416242Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.735993632Z" level=info msg="Start event monitor"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736047688Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736057895Z" level=info msg="Start streaming server"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736067077Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736078375Z" level=info msg="runtime interface starting up..."
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736095212Z" level=info msg="starting plugins..."
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.736110121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 01:55:22 newest-cni-456492 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 01:55:22 newest-cni-456492 containerd[757]: time="2025-12-17T01:55:22.737713198Z" level=info msg="containerd successfully booted in 0.087876s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:05:10.736129    5969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:10.736685    5969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:10.738240    5969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:10.738584    5969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:10.740038    5969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:05:10 up  7:47,  0 user,  load average: 0.34, 0.83, 1.48
	Linux newest-cni-456492 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:07 newest-cni-456492 kubelet[5850]: E1217 02:05:07.937636    5850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:07 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:08 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 450.
	Dec 17 02:05:08 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:08 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:08 newest-cni-456492 kubelet[5855]: E1217 02:05:08.679170    5855 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:08 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:08 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:09 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 17 02:05:09 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:09 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:09 newest-cni-456492 kubelet[5860]: E1217 02:05:09.427092    5860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:09 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:09 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:10 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 452.
	Dec 17 02:05:10 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:10 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:10 newest-cni-456492 kubelet[5885]: E1217 02:05:10.229865    5885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:10 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:10 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
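Note on the apply failures in the dump above: kubectl validates manifests client-side by downloading the OpenAPI schema from the apiserver, so with nothing listening on localhost:8443 even the validation step fails with connection refused. The workaround named in the error message is a real kubectl flag; a minimal sketch, reusing the storage-provisioner manifest path from the log:

	# skips only the client-side OpenAPI schema fetch
	kubectl apply --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml

It would not rescue these applies, though: the request that follows validation goes to the same unreachable apiserver, so the addon enablement can only succeed once the control plane is actually up.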
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 6 (345.781014ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:05:11.312197 1498421 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-456492" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (98.55s)
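The kubelet journal in the dump above is the likely root cause for this failure group: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd had already restarted it 452 times, and with no kubelet the apiserver never comes up, which accounts for every connection-refused error. One way to confirm which cgroup version a host runs, assuming a standard systemd layout (prints cgroup2fs under cgroup v2 and tmpfs under cgroup v1):

	stat -fc %T /sys/fs/cgroup/

A cgroup v1 result here would be consistent with the rest of this report: the runner is Ubuntu 20.04, and docker info reports CgroupDriver:cgroupfs.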

x
+
TestStartStop/group/newest-cni/serial/SecondStart (373.02s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1217 02:05:35.918174 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:33.442167 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:56.877379 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m7.999889977s)

-- stdout --
	* [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 02:05:12.850501 1498704 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:05:12.850637 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.850649 1498704 out.go:374] Setting ErrFile to fd 2...
	I1217 02:05:12.850655 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.851041 1498704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:05:12.851511 1498704 out.go:368] Setting JSON to false
	I1217 02:05:12.852479 1498704 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28063,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:05:12.852572 1498704 start.go:143] virtualization:  
	I1217 02:05:12.855474 1498704 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:05:12.857672 1498704 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:12.857773 1498704 notify.go:221] Checking for updates...
	I1217 02:05:12.863254 1498704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:12.866037 1498704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:12.868948 1498704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:05:12.871863 1498704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:05:12.874787 1498704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:12.878103 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:12.878662 1498704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:12.900447 1498704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:05:12.900598 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:12.960234 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:12.950894493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:12.960347 1498704 docker.go:319] overlay module found
	I1217 02:05:12.963370 1498704 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:12.966210 1498704 start.go:309] selected driver: docker
	I1217 02:05:12.966233 1498704 start.go:927] validating driver "docker" against &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:12.966382 1498704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:12.967091 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:13.019814 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:13.010546439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:13.020178 1498704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:05:13.020210 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:13.020262 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:13.020307 1498704 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:13.023434 1498704 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 02:05:13.026234 1498704 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:05:13.029131 1498704 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:13.031994 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:13.032048 1498704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 02:05:13.032060 1498704 cache.go:65] Caching tarball of preloaded images
	I1217 02:05:13.032113 1498704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:13.032150 1498704 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:05:13.032162 1498704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 02:05:13.032281 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.052501 1498704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:13.052525 1498704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:13.052542 1498704 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:13.052572 1498704 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:13.052633 1498704 start.go:364] duration metric: took 38.597µs to acquireMachinesLock for "newest-cni-456492"
	I1217 02:05:13.052657 1498704 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:13.052663 1498704 fix.go:54] fixHost starting: 
	I1217 02:05:13.052926 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.069585 1498704 fix.go:112] recreateIfNeeded on newest-cni-456492: state=Stopped err=<nil>
	W1217 02:05:13.069617 1498704 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:05:13.072747 1498704 out.go:252] * Restarting existing docker container for "newest-cni-456492" ...
	I1217 02:05:13.072837 1498704 cli_runner.go:164] Run: docker start newest-cni-456492
	I1217 02:05:13.388698 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.414091 1498704 kic.go:430] container "newest-cni-456492" state is running.
	I1217 02:05:13.414525 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:13.433261 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.433961 1498704 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:13.434162 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:13.455043 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:13.455367 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:13.455376 1498704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:13.456190 1498704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:16.589394 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.589424 1498704 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 02:05:16.589509 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.608291 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.608611 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.608628 1498704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 02:05:16.748318 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.748417 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.766749 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.767082 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.767106 1498704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:16.899757 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:05:16.899788 1498704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:05:16.899820 1498704 ubuntu.go:190] setting up certificates
	I1217 02:05:16.899839 1498704 provision.go:84] configureAuth start
	I1217 02:05:16.899906 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:16.924665 1498704 provision.go:143] copyHostCerts
	I1217 02:05:16.924743 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:05:16.924752 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:05:16.924828 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:05:16.924938 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:05:16.924943 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:05:16.924976 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:05:16.925038 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:05:16.925047 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:05:16.925072 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:05:16.925127 1498704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
	I1217 02:05:17.601803 1498704 provision.go:177] copyRemoteCerts
	I1217 02:05:17.601873 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:17.601926 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.636357 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.741722 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:05:17.761034 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:17.779707 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:17.797837 1498704 provision.go:87] duration metric: took 897.968313ms to configureAuth
	I1217 02:05:17.797870 1498704 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:17.798087 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:17.798100 1498704 machine.go:97] duration metric: took 4.364124237s to provisionDockerMachine
	I1217 02:05:17.798118 1498704 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 02:05:17.798134 1498704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:17.798198 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:17.798254 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.815970 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.909838 1498704 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:17.913351 1498704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:17.913383 1498704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:17.913395 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:05:17.913453 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:05:17.913544 1498704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:05:17.913681 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:17.921360 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:17.939679 1498704 start.go:296] duration metric: took 141.5414ms for postStartSetup
	I1217 02:05:17.939826 1498704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:17.939877 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.957594 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.059706 1498704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:18.065122 1498704 fix.go:56] duration metric: took 5.012436797s for fixHost
	I1217 02:05:18.065156 1498704 start.go:83] releasing machines lock for "newest-cni-456492", held for 5.012509749s
	I1217 02:05:18.065242 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:18.082756 1498704 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:18.082825 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.083064 1498704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:05:18.083126 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.102210 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.102306 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.193581 1498704 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:18.286865 1498704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:18.291506 1498704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:18.291604 1498704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:18.301001 1498704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:18.301023 1498704 start.go:496] detecting cgroup driver to use...
	I1217 02:05:18.301056 1498704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:18.301104 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:05:18.318916 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:18.332388 1498704 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:05:18.332450 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:05:18.348560 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:05:18.361841 1498704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:05:18.501489 1498704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:05:18.625467 1498704 docker.go:234] disabling docker service ...
	I1217 02:05:18.625544 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:05:18.642408 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:05:18.656014 1498704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:05:18.765362 1498704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:05:18.886790 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:18.900617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:18.915221 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:18.924900 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:18.934313 1498704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:18.934389 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:18.943795 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.953183 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:18.962127 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.971122 1498704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:18.979419 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:18.988380 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:18.999817 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
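	Taken together, the sed edits above pin the sandbox (pause) image, set restrict_oom_score_adj = false, force the cgroupfs driver (SystemdCgroup = false), normalize the runc runtime to io.containerd.runc.v2, point the CNI conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true. One way to spot-check the result (illustrative):
	
	  sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
	    /etc/containerd/config.toml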
	I1217 02:05:19.010244 1498704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:19.018996 1498704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:19.026929 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.133908 1498704 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:05:19.268405 1498704 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:05:19.268490 1498704 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:05:19.272284 1498704 start.go:564] Will wait 60s for crictl version
	I1217 02:05:19.272347 1498704 ssh_runner.go:195] Run: which crictl
	I1217 02:05:19.275756 1498704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:19.301130 1498704 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:05:19.301201 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.322372 1498704 ssh_runner.go:195] Run: containerd --version
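	The runtime handshake above can be reproduced by hand: minikube waits up to 60s for the socket to appear, then queries the CRI endpoint and the containerd binary directly:
	
	  stat /run/containerd/containerd.sock      # socket must exist before the 60s timeout
	  sudo /usr/local/bin/crictl version        # RuntimeName: containerd, RuntimeVersion: v2.2.0
	  containerd --version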
	I1217 02:05:19.348617 1498704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:05:19.351633 1498704 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:05:19.367774 1498704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:05:19.371830 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
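	The /etc/hosts one-liner above is a read-modify-write: filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file into place with sudo (a plain redirection would fail, because the redirect is opened by the unprivileged shell). Expanded for readability, with printf standing in for the tab-containing echo:
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.85.1\thost.minikube.internal\n'
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts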
	I1217 02:05:19.384786 1498704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:05:19.387816 1498704 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:19.387972 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:19.388067 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.414283 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.414309 1498704 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:05:19.414396 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.439246 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.439272 1498704 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:05:19.439280 1498704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:05:19.439400 1498704 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:05:19.439475 1498704 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:05:19.464932 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:19.464957 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:19.464978 1498704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:05:19.465000 1498704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:19.465118 1498704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
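	
	Before kubeadm consumes this rendered config (scp'd below as /var/tmp/minikube/kubeadm.yaml.new), it can be sanity-checked with a dry run; this is an illustrative check, not a step minikube performs here:
	
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run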
	
	I1217 02:05:19.465204 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:19.473220 1498704 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:19.473323 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:19.481191 1498704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:05:19.494733 1498704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:19.508255 1498704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 02:05:19.521299 1498704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:19.524923 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.534869 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.640328 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
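	The two unit files scp'd above (the 10-kubeadm.conf drop-in and kubelet.service) only take effect after the daemon-reload; the effective unit can be inspected with systemctl cat:
	
	  sudo systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in
	  sudo systemctl start kubelet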
	I1217 02:05:19.658104 1498704 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 02:05:19.658171 1498704 certs.go:195] generating shared ca certs ...
	I1217 02:05:19.658202 1498704 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:19.658408 1498704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:05:19.658487 1498704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:05:19.658525 1498704 certs.go:257] generating profile certs ...
	I1217 02:05:19.658693 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 02:05:19.658805 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 02:05:19.658882 1498704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 02:05:19.659021 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:05:19.659079 1498704 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:19.659103 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:05:19.659164 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:05:19.659220 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:05:19.659286 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:05:19.659364 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:19.660007 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:19.680759 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:05:19.702848 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:19.724636 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:05:19.743745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:19.766745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:19.785567 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:19.805217 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:05:19.823885 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:05:19.842565 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:05:19.861136 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:19.881009 1498704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:19.900011 1498704 ssh_runner.go:195] Run: openssl version
	I1217 02:05:19.907885 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.916589 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:19.925294 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929759 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929879 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.973048 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:19.981056 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:05:19.988859 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:05:19.996704 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001580 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001857 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.047306 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:05:20.055839 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.063938 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:05:20.072095 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076535 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076605 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.118765 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
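	Each CA above is installed under /usr/share/ca-certificates and then verified via its OpenSSL subject hash: `openssl x509 -hash -noout` prints the hash (b5213941, 51391683, 3ec20f2e here), and `test -L` confirms the hash-named symlink exists in /etc/ssl/certs. The generic pattern:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo test -L "/etc/ssl/certs/$h.0" && echo "hash link $h.0 present"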
	I1217 02:05:20.126976 1498704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:20.131206 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:20.172934 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:20.214362 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:20.255854 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:20.297036 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:20.339864 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
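	The -checkend 86400 probes above exit non-zero if a certificate expires within the next 86400 seconds (24 hours), so passing all six means every control-plane cert is good for at least another day:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for >=24h" || echo "expires within 24h"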
	I1217 02:05:20.381722 1498704 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:20.381822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:05:20.381904 1498704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:05:20.424644 1498704 cri.go:89] found id: ""
	I1217 02:05:20.424764 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:20.433427 1498704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:20.433456 1498704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:20.433550 1498704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:20.441251 1498704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:20.442099 1498704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.442456 1498704 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456492" cluster setting kubeconfig missing "newest-cni-456492" context setting]
	I1217 02:05:20.442986 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.445078 1498704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:20.453918 1498704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:05:20.453968 1498704 kubeadm.go:602] duration metric: took 20.505601ms to restartPrimaryControlPlane
	I1217 02:05:20.453978 1498704 kubeadm.go:403] duration metric: took 72.266987ms to StartCluster
	I1217 02:05:20.453993 1498704 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.454058 1498704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.454938 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.455145 1498704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:05:20.455516 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:20.455530 1498704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:20.455683 1498704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456492"
	I1217 02:05:20.455704 1498704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456492"
	I1217 02:05:20.455734 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456291 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.456447 1498704 addons.go:70] Setting dashboard=true in profile "newest-cni-456492"
	I1217 02:05:20.456459 1498704 addons.go:239] Setting addon dashboard=true in "newest-cni-456492"
	W1217 02:05:20.456465 1498704 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:20.456487 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456873 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.457295 1498704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456492"
	I1217 02:05:20.457327 1498704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456492"
	I1217 02:05:20.457617 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.460758 1498704 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:20.464032 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:20.511072 1498704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:20.511238 1498704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:20.511526 1498704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456492"
	I1217 02:05:20.511584 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.512215 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.514400 1498704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.514426 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:20.514495 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.517419 1498704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:05:20.520345 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:20.520380 1498704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:20.520470 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.545933 1498704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.545958 1498704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:20.546028 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.571506 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.597655 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.610038 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.744231 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:20.749535 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.770211 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.807578 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:20.807656 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:20.822894 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:20.822966 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:20.838508 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:20.838583 1498704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:20.854473 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:20.854546 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:20.870442 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:20.870510 1498704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:20.892689 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:20.892763 1498704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:20.907212 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:20.907283 1498704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:05:20.920377 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:20.920447 1498704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:20.934242 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:20.934313 1498704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:20.949356 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
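	All ten dashboard manifests are applied in a single kubectl invocation via repeated -f flags, using the node-local kubeconfig and the versioned kubectl binary. An equivalent (illustrative) form points -f at the whole addons directory, though that would also pick up the storage manifests staged alongside:
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/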
	I1217 02:05:21.122136 1498704 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:05:21.122238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:21.122377 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122428 1498704 retry.go:31] will retry after 140.698925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122498 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122514 1498704 retry.go:31] will retry after 200.872114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122730 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122750 1498704 retry.go:31] will retry after 347.753215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.264115 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.324524 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:21.326955 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.326987 1498704 retry.go:31] will retry after 509.503403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.390952 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.391056 1498704 retry.go:31] will retry after 486.50092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.471226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.536155 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.536193 1498704 retry.go:31] will retry after 374.340896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.623199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:21.836797 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.878378 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:21.911452 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.932525 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.932573 1498704 retry.go:31] will retry after 673.446858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.024062 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.024104 1498704 retry.go:31] will retry after 357.640722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.030810 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.030855 1498704 retry.go:31] will retry after 697.108634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.122842 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:22.382402 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.447494 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.447529 1498704 retry.go:31] will retry after 907.58474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.606794 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:22.623237 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:22.712284 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.712316 1498704 retry.go:31] will retry after 1.166453431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.728640 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.790257 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.790294 1498704 retry.go:31] will retry after 693.242896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.122710 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.356122 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:23.441808 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.441876 1498704 retry.go:31] will retry after 812.660244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.484193 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:23.553009 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.553088 1498704 retry.go:31] will retry after 1.540590446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.878932 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:23.940625 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.940657 1498704 retry.go:31] will retry after 1.715347401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.123129 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:24.255570 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.318166 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.318201 1498704 retry.go:31] will retry after 2.528105033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.622416 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.094702 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.122740 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.190434 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.190468 1498704 retry.go:31] will retry after 2.137532007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.622874 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.656976 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.735191 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.735228 1498704 retry.go:31] will retry after 1.824141068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.122718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.622402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.847039 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:26.915825 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.915864 1498704 retry.go:31] will retry after 3.628983163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.123109 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:27.329106 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:27.406949 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.406981 1498704 retry.go:31] will retry after 4.03347247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.560441 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:27.620941 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.620972 1498704 retry.go:31] will retry after 3.991176553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.623048 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.123323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.622690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.123056 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.622383 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:30.122331 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
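
The half-second cadence of these pgrep probes appears to be minikube polling for a live kube-apiserver process inside the node; until one shows up, every apply above keeps hitting the same refused connection. An equivalent readiness check can poll the port rather than the process list, as in this hypothetical Go sketch (the address, cadence, and timeout are illustrative, not minikube's code):

    // readiness_sketch.go — hypothetical port-based analogue of the repeated
    // "pgrep -xnf kube-apiserver.*minikube.*" probes in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer dials addr until a TCP connection succeeds or the
    // deadline passes; a "connection refused" here corresponds to the dial
    // errors kubectl reports throughout this log.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the pgrep probes
    	}
    	return fmt.Errorf("apiserver at %s not ready after %v", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8443", 10*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("apiserver is accepting connections")
    	}
    }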
	I1217 02:05:30.545057 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:30.621785 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.621822 1498704 retry.go:31] will retry after 4.4452238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.622853 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.122373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.440743 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:31.509992 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.510031 1498704 retry.go:31] will retry after 5.407597033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.613135 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:31.622584 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:31.697739 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.697776 1498704 retry.go:31] will retry after 2.825488937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:32.122427 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:32.622356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.122865 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.622376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.122833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.523532 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:34.583134 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.583163 1498704 retry.go:31] will retry after 5.545323918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.622442 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:35.068147 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:35.122850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:35.134133 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.134169 1498704 retry.go:31] will retry after 4.861802964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.622377 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.122369 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.622378 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.918683 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:36.978447 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.978481 1498704 retry.go:31] will retry after 6.962519237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:37.122560 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:37.622836 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.122524 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.622862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.122871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.623166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.996206 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:40.063255 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.063292 1498704 retry.go:31] will retry after 7.781680021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.122526 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:40.129164 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:40.214505 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.214533 1498704 retry.go:31] will retry after 8.678807682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.622298 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.122333 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.622358 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.127159 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.622438 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.122461 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.622352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.941994 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:44.001689 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.001730 1498704 retry.go:31] will retry after 6.066883065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.123123 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:44.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.126164 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.623052 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.122898 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.622334 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.122393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.622323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.845223 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:47.908667 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:47.908705 1498704 retry.go:31] will retry after 18.007710991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.122861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.622412 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.894229 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:48.969090 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.969125 1498704 retry.go:31] will retry after 16.055685136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:49.122381 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:49.622837 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:50.069336 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:50.122996 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:50.134357 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.134397 1498704 retry.go:31] will retry after 18.576318696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.622399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.122356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.623152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.122522 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.622365 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.123228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.622373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.622394 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.122388 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.122434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.622357 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.122345 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.622407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.122690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.622871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.122944 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.622822 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.123626 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.623133 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.122517 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.622861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.122995 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.622415 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.122366 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.623001 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.122805 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.622382 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.025226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:05.088234 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.088268 1498704 retry.go:31] will retry after 18.521411157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.122353 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.622518 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.916578 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:05.977704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.977737 1498704 retry.go:31] will retry after 29.235613176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.123051 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:06.623116 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.122863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.622361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.123131 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.622326 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.711597 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:08.773115 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:08.773147 1498704 retry.go:31] will retry after 24.92518591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:09.122643 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:09.622393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.122375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.622634 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.122959 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.622850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.122346 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.622435 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.122648 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.622828 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.123317 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.622872 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.122361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.622296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.622835 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.122778 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.123152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.623163 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.122407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.622841 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.123196 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.622898 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:20.622982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:20.655063 1498704 cri.go:89] found id: ""
	I1217 02:06:20.655091 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.655100 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:20.655106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:20.655169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:20.687901 1498704 cri.go:89] found id: ""
	I1217 02:06:20.687924 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.687932 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:20.687938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:20.687996 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:20.713818 1498704 cri.go:89] found id: ""
	I1217 02:06:20.713845 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.713854 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:20.713860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:20.713918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:20.738353 1498704 cri.go:89] found id: ""
	I1217 02:06:20.738376 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.738384 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:20.738396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:20.738455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:20.763275 1498704 cri.go:89] found id: ""
	I1217 02:06:20.763300 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.763309 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:20.763316 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:20.763377 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:20.787303 1498704 cri.go:89] found id: ""
	I1217 02:06:20.787328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.787337 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:20.787343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:20.787402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:20.812203 1498704 cri.go:89] found id: ""
	I1217 02:06:20.812230 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.812238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:20.812244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:20.812304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:20.836788 1498704 cri.go:89] found id: ""
	I1217 02:06:20.836814 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.836823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:20.836831 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:20.836842 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:20.901301 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:20.901324 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:20.901337 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:20.927207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:20.927244 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:20.955351 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:20.955377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:21.010892 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:21.010928 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
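
When the process check gives up, the log falls back to a per-component sweep: crictl ps -a --quiet --name=<component> for each control-plane piece, followed by journalctl, dmesg, and describe-nodes collection. A compact sketch of the sweep that produces the "0 containers" warnings (illustrative, not the actual cri.go implementation):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same component list that appears in the log, in the same order.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// -a includes exited containers; --quiet prints only IDs, so an
    		// empty output means no container ever matched this name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil || len(bytes.TrimSpace(out)) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    		}
    	}
    }
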
	I1217 02:06:23.526340 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:23.536950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:23.537021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:23.561240 1498704 cri.go:89] found id: ""
	I1217 02:06:23.561267 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.561276 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:23.561282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:23.561340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:23.586385 1498704 cri.go:89] found id: ""
	I1217 02:06:23.586407 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.586415 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:23.586421 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:23.586479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:23.610820 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:06:23.612177 1498704 cri.go:89] found id: ""
	I1217 02:06:23.612201 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.612210 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:23.612216 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:23.612270 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1217 02:06:23.698147 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698227 1498704 retry.go:31] will retry after 35.769421328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698299 1498704 cri.go:89] found id: ""
	I1217 02:06:23.698328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.698348 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:23.698379 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:23.698473 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:23.730479 1498704 cri.go:89] found id: ""
	I1217 02:06:23.730555 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.730569 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:23.730577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:23.730656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:23.757694 1498704 cri.go:89] found id: ""
	I1217 02:06:23.757717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.757726 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:23.757732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:23.757802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:23.787070 1498704 cri.go:89] found id: ""
	I1217 02:06:23.787145 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.787162 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:23.787170 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:23.787231 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:23.815895 1498704 cri.go:89] found id: ""
	I1217 02:06:23.815928 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.815937 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:23.815947 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:23.815977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:23.845530 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:23.845558 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:23.904348 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:23.904385 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:23.919409 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:23.919438 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:23.986183 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:23.986246 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:23.986266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:26.512910 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:26.523572 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:26.523644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:26.549045 1498704 cri.go:89] found id: ""
	I1217 02:06:26.549077 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.549087 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:26.549100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:26.549181 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:26.573386 1498704 cri.go:89] found id: ""
	I1217 02:06:26.573409 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.573417 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:26.573423 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:26.573485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:26.597629 1498704 cri.go:89] found id: ""
	I1217 02:06:26.597673 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.597688 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:26.597695 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:26.597755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:26.625905 1498704 cri.go:89] found id: ""
	I1217 02:06:26.625933 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.625942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:26.625949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:26.626016 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:26.663442 1498704 cri.go:89] found id: ""
	I1217 02:06:26.663466 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.663475 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:26.663482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:26.663565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:26.692315 1498704 cri.go:89] found id: ""
	I1217 02:06:26.692342 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.692351 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:26.692362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:26.692422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:26.718259 1498704 cri.go:89] found id: ""
	I1217 02:06:26.718287 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.718296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:26.718303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:26.718361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:26.743360 1498704 cri.go:89] found id: ""
	I1217 02:06:26.743383 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.743391 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:26.743400 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:26.743412 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:26.770132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:26.770158 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:26.829657 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:26.829749 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:26.845511 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:26.845538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:26.912984 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:26.913004 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:26.913017 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.440066 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:29.450548 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:29.450621 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:29.474768 1498704 cri.go:89] found id: ""
	I1217 02:06:29.474800 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.474809 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:29.474816 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:29.474886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:29.498947 1498704 cri.go:89] found id: ""
	I1217 02:06:29.498969 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.498977 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:29.498983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:29.499041 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:29.523540 1498704 cri.go:89] found id: ""
	I1217 02:06:29.523564 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.523573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:29.523579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:29.523643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:29.556044 1498704 cri.go:89] found id: ""
	I1217 02:06:29.556069 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.556078 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:29.556084 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:29.556144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:29.581373 1498704 cri.go:89] found id: ""
	I1217 02:06:29.581399 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.581408 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:29.581414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:29.581485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:29.607453 1498704 cri.go:89] found id: ""
	I1217 02:06:29.607479 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.607489 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:29.607495 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:29.607576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:29.639841 1498704 cri.go:89] found id: ""
	I1217 02:06:29.639865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.639875 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:29.639881 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:29.639938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:29.670608 1498704 cri.go:89] found id: ""
	I1217 02:06:29.670635 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.670643 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:29.670653 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:29.670665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:29.728148 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:29.728181 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:29.743004 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:29.743029 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:29.815740 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:29.815762 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:29.815775 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.842206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:29.842243 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:32.370825 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:32.383399 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:32.383490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:32.416122 1498704 cri.go:89] found id: ""
	I1217 02:06:32.416148 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.416157 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:32.416164 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:32.416235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:32.450068 1498704 cri.go:89] found id: ""
	I1217 02:06:32.450092 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.450101 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:32.450107 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:32.450176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:32.475101 1498704 cri.go:89] found id: ""
	I1217 02:06:32.475126 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.475135 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:32.475142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:32.475218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:32.500347 1498704 cri.go:89] found id: ""
	I1217 02:06:32.500372 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.500380 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:32.500387 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:32.500447 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:32.525315 1498704 cri.go:89] found id: ""
	I1217 02:06:32.525346 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.525355 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:32.525361 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:32.525440 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:32.550267 1498704 cri.go:89] found id: ""
	I1217 02:06:32.550341 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.550358 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:32.550365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:32.550424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:32.575413 1498704 cri.go:89] found id: ""
	I1217 02:06:32.575438 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.575447 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:32.575453 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:32.575559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:32.603477 1498704 cri.go:89] found id: ""
	I1217 02:06:32.603503 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.603513 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:32.603523 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:32.603568 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:32.669699 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:32.669735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:32.686097 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:32.686126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:32.755583 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:32.755604 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:32.755616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:32.782146 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:32.782195 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:33.698737 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:33.767478 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:33.767516 1498704 retry.go:31] will retry after 19.401613005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.214860 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:35.276710 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.276741 1498704 retry.go:31] will retry after 25.686831054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.310030 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:35.320395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:35.320472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:35.344503 1498704 cri.go:89] found id: ""
	I1217 02:06:35.344525 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.344533 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:35.344539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:35.344597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:35.375750 1498704 cri.go:89] found id: ""
	I1217 02:06:35.375773 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.375782 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:35.375788 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:35.375857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:35.403776 1498704 cri.go:89] found id: ""
	I1217 02:06:35.403803 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.403813 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:35.403819 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:35.403878 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:35.437584 1498704 cri.go:89] found id: ""
	I1217 02:06:35.437608 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.437616 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:35.437623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:35.437723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:35.467173 1498704 cri.go:89] found id: ""
	I1217 02:06:35.467207 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.467216 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:35.467223 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:35.467289 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:35.491257 1498704 cri.go:89] found id: ""
	I1217 02:06:35.491284 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.491294 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:35.491301 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:35.491380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:35.515935 1498704 cri.go:89] found id: ""
	I1217 02:06:35.515961 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.515971 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:35.515978 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:35.516077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:35.542706 1498704 cri.go:89] found id: ""
	I1217 02:06:35.542730 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.542739 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:35.542748 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:35.542759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:35.601383 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:35.601428 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:35.616228 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:35.616269 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:35.693548 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:35.693569 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:35.693584 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:35.719247 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:35.719286 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
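
The block above is one pass of minikube's apiserver wait loop: a pgrep for a kube-apiserver process, then one crictl query per control-plane component, each returning an empty ID list because no containers exist yet. A minimal sketch of the same probe run by hand over minikube ssh (SSH access to the node is assumed; the last line mirrors the log's own crictl-or-docker fallback for the container-status step):

    # check for a kube-apiserver process, then for its container in any state
    minikube ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    minikube ssh -- "sudo crictl ps -a --quiet --name=kube-apiserver"
    # fall back to docker when crictl is unavailable, as the log does
    minikube ssh -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'

An empty result from the second command is exactly the "found id: \"\"" / "0 containers" pair seen in each cycle above.
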
	I1217 02:06:38.250028 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:38.261967 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:38.262037 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:38.286400 1498704 cri.go:89] found id: ""
	I1217 02:06:38.286423 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.286431 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:38.286437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:38.286499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:38.310618 1498704 cri.go:89] found id: ""
	I1217 02:06:38.310639 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.310647 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:38.310654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:38.310713 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:38.335110 1498704 cri.go:89] found id: ""
	I1217 02:06:38.335136 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.335144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:38.335151 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:38.335214 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:38.364179 1498704 cri.go:89] found id: ""
	I1217 02:06:38.364202 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.364211 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:38.364218 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:38.364278 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:38.402338 1498704 cri.go:89] found id: ""
	I1217 02:06:38.402366 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.402374 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:38.402384 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:38.402443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:38.433053 1498704 cri.go:89] found id: ""
	I1217 02:06:38.433081 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.433090 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:38.433096 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:38.433155 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:38.461635 1498704 cri.go:89] found id: ""
	I1217 02:06:38.461688 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.461698 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:38.461704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:38.461767 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:38.486774 1498704 cri.go:89] found id: ""
	I1217 02:06:38.486798 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.486807 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:38.486816 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:38.486827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:38.543417 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:38.543453 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:38.558472 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:38.558499 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:38.627234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:38.627308 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:38.627336 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:38.656399 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:38.656481 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
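
Every "describe nodes" attempt fails the same way: "dial tcp [::1]:8443: connect: connection refused" is raised on the client side before any request is sent, which means nothing is bound to the apiserver port inside the node at all. A quick way to confirm that directly (assuming ss is present in the node image):

    # expect no listener while the apiserver is down
    minikube ssh -- "sudo ss -ltnp | grep 8443 || echo 'nothing listening on :8443'"
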
	I1217 02:06:41.188669 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:41.199463 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:41.199550 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:41.223737 1498704 cri.go:89] found id: ""
	I1217 02:06:41.223762 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.223771 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:41.223778 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:41.223842 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:41.248972 1498704 cri.go:89] found id: ""
	I1217 02:06:41.248998 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.249014 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:41.249022 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:41.249084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:41.274840 1498704 cri.go:89] found id: ""
	I1217 02:06:41.274873 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.274886 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:41.274892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:41.274965 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:41.302162 1498704 cri.go:89] found id: ""
	I1217 02:06:41.302188 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.302197 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:41.302204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:41.302274 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:41.331745 1498704 cri.go:89] found id: ""
	I1217 02:06:41.331771 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.331780 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:41.331786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:41.331872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:41.366507 1498704 cri.go:89] found id: ""
	I1217 02:06:41.366538 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.366559 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:41.366567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:41.366642 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:41.402343 1498704 cri.go:89] found id: ""
	I1217 02:06:41.402390 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.402400 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:41.402409 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:41.402482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:41.442142 1498704 cri.go:89] found id: ""
	I1217 02:06:41.442169 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.442177 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:41.442187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:41.442198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:41.498349 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:41.498432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:41.514261 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:41.514287 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:41.577450 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:41.577470 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:41.577483 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:41.602731 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:41.602766 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.138863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:44.149308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:44.149424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:44.175006 1498704 cri.go:89] found id: ""
	I1217 02:06:44.175031 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.175040 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:44.175047 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:44.175103 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:44.199571 1498704 cri.go:89] found id: ""
	I1217 02:06:44.199596 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.199605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:44.199612 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:44.199669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:44.227289 1498704 cri.go:89] found id: ""
	I1217 02:06:44.227313 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.227323 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:44.227329 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:44.227418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:44.255509 1498704 cri.go:89] found id: ""
	I1217 02:06:44.255549 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.255558 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:44.255564 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:44.255622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:44.282827 1498704 cri.go:89] found id: ""
	I1217 02:06:44.282850 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.282858 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:44.282864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:44.282971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:44.310331 1498704 cri.go:89] found id: ""
	I1217 02:06:44.310354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.310363 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:44.310370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:44.310427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:44.334927 1498704 cri.go:89] found id: ""
	I1217 02:06:44.334952 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.334961 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:44.334968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:44.335068 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:44.359119 1498704 cri.go:89] found id: ""
	I1217 02:06:44.359144 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.359153 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:44.359162 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:44.359192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:44.436966 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:44.436987 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:44.437000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:44.462649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:44.462686 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.492091 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:44.492120 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:44.548670 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:44.548707 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
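
For manual triage, the same sources minikube samples above can be pulled directly. The unit names, level filters, and the 400-line cap are copied from the log; --no-pager is added here only as an assumption for interactive use:

    minikube ssh -- "sudo journalctl -u kubelet -n 400 --no-pager"
    minikube ssh -- "sudo journalctl -u containerd -n 400 --no-pager"
    minikube ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
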
	I1217 02:06:47.063448 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:47.073962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:47.074076 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:47.100530 1498704 cri.go:89] found id: ""
	I1217 02:06:47.100565 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.100574 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:47.100580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:47.100656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:47.126541 1498704 cri.go:89] found id: ""
	I1217 02:06:47.126573 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.126582 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:47.126589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:47.126657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:47.155783 1498704 cri.go:89] found id: ""
	I1217 02:06:47.155807 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.155816 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:47.155822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:47.155887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:47.183519 1498704 cri.go:89] found id: ""
	I1217 02:06:47.183547 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.183556 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:47.183562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:47.183640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:47.207004 1498704 cri.go:89] found id: ""
	I1217 02:06:47.207029 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.207038 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:47.207044 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:47.207107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:47.236132 1498704 cri.go:89] found id: ""
	I1217 02:06:47.236157 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.236166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:47.236173 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:47.236237 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:47.262428 1498704 cri.go:89] found id: ""
	I1217 02:06:47.262452 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.262460 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:47.262470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:47.262526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:47.291039 1498704 cri.go:89] found id: ""
	I1217 02:06:47.291113 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.291127 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:47.291137 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:47.291154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:47.348423 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:47.348457 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.362973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:47.363001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:47.446529 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:47.446602 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:47.446619 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:47.471848 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:47.471885 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.002430 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:50.016670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:50.016759 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:50.048092 1498704 cri.go:89] found id: ""
	I1217 02:06:50.048116 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.048126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:50.048132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:50.048193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:50.077981 1498704 cri.go:89] found id: ""
	I1217 02:06:50.078006 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.078016 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:50.078023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:50.078084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:50.104799 1498704 cri.go:89] found id: ""
	I1217 02:06:50.104824 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.104833 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:50.104839 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:50.104899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:50.134987 1498704 cri.go:89] found id: ""
	I1217 02:06:50.135010 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.135019 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:50.135025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:50.135088 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:50.163663 1498704 cri.go:89] found id: ""
	I1217 02:06:50.163689 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.163698 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:50.163704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:50.163771 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:50.189331 1498704 cri.go:89] found id: ""
	I1217 02:06:50.189354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.189362 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:50.189369 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:50.189435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:50.214491 1498704 cri.go:89] found id: ""
	I1217 02:06:50.214516 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.214525 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:50.214531 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:50.214590 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:50.238415 1498704 cri.go:89] found id: ""
	I1217 02:06:50.238442 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.238451 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:50.238460 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:50.238472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.269776 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:50.269804 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:50.327018 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:50.327055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:50.341848 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:50.341876 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:50.424429 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:50.424452 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:50.424466 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:52.954006 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:52.964727 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:52.964802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:52.989789 1498704 cri.go:89] found id: ""
	I1217 02:06:52.989810 1498704 logs.go:282] 0 containers: []
	W1217 02:06:52.989819 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:52.989826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:52.989887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:53.015439 1498704 cri.go:89] found id: ""
	I1217 02:06:53.015467 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.015476 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:53.015482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:53.015592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:53.040841 1498704 cri.go:89] found id: ""
	I1217 02:06:53.040865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.040875 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:53.040882 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:53.040942 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:53.066349 1498704 cri.go:89] found id: ""
	I1217 02:06:53.066374 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.066383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:53.066389 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:53.066451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:53.091390 1498704 cri.go:89] found id: ""
	I1217 02:06:53.091415 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.091424 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:53.091430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:53.091490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:53.117556 1498704 cri.go:89] found id: ""
	I1217 02:06:53.117581 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.117590 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:53.117597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:53.117683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:53.142385 1498704 cri.go:89] found id: ""
	I1217 02:06:53.142411 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.142421 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:53.142428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:53.142487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:53.167326 1498704 cri.go:89] found id: ""
	I1217 02:06:53.167351 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.167360 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:53.167370 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:53.167410 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:53.169580 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:06:53.227048 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:53.227133 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:06:53.263335 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:53.263474 1498704 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
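Note: every dashboard manifest above fails for a single reason: kubectl validation needs the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443. An illustrative way to confirm this from outside the log loop (the profile name is a placeholder, not taken from this run):

    # Illustrative check, not part of the test run: probe the apiserver health
    # endpoint from inside the node. "connection refused" confirms no apiserver is up.
    minikube ssh -p <profile> "curl -sk https://localhost:8443/healthz"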
	I1217 02:06:53.263485 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:53.263548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:53.331925 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
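Note: the five memcache.go lines per attempt are kubectl's API-discovery retries; each one dies at TCP connect, so the failure is at the socket, not in authentication or discovery. A raw request makes the same point while skipping discovery entirely (illustrative, reusing the binary and kubeconfig paths shown in the log):

    # Illustrative: a raw readiness probe bypasses API discovery, so a refused
    # connection here isolates the problem to the apiserver socket itself.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz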
	I1217 02:06:53.331956 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:53.331970 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:53.358423 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:53.358461 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:55.889770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:55.902670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:55.902755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:55.931695 1498704 cri.go:89] found id: ""
	I1217 02:06:55.931717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.931726 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:55.931732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:55.931792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:55.957876 1498704 cri.go:89] found id: ""
	I1217 02:06:55.957898 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.957906 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:55.957913 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:55.957971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:55.985470 1498704 cri.go:89] found id: ""
	I1217 02:06:55.985494 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.985503 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:55.985510 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:55.985569 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:56.012853 1498704 cri.go:89] found id: ""
	I1217 02:06:56.012876 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.012885 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:56.012892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:56.012953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:56.038869 1498704 cri.go:89] found id: ""
	I1217 02:06:56.038896 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.038906 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:56.038912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:56.038974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:56.063896 1498704 cri.go:89] found id: ""
	I1217 02:06:56.063922 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.063931 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:56.063938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:56.063998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:56.094167 1498704 cri.go:89] found id: ""
	I1217 02:06:56.094194 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.094202 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:56.094209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:56.094317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:56.119180 1498704 cri.go:89] found id: ""
	I1217 02:06:56.119203 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.119211 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
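Note: each polling cycle queries the same fixed list of component names through crictl, and with no control plane running every query returns an empty ID list. The cycle is equivalent to this shell sketch (an illustration of what the log shows, not minikube's actual code):

    # Sketch of minikube's per-component CRI query; an empty result for every
    # name means no control-plane container was ever created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done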
	I1217 02:06:56.119220 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:56.119233 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:56.145717 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:56.145755 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:56.174733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:56.174764 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:56.231996 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:56.232031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:56.246270 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:56.246298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:56.310523 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:58.810773 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:58.820984 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:58.821052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:58.844690 1498704 cri.go:89] found id: ""
	I1217 02:06:58.844713 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.844723 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:58.844729 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:58.844789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:58.869040 1498704 cri.go:89] found id: ""
	I1217 02:06:58.869065 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.869074 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:58.869081 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:58.869141 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:58.897937 1498704 cri.go:89] found id: ""
	I1217 02:06:58.897965 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.897974 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:58.897981 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:58.898046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:58.936181 1498704 cri.go:89] found id: ""
	I1217 02:06:58.936206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.936216 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:58.936222 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:58.936284 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:58.961870 1498704 cri.go:89] found id: ""
	I1217 02:06:58.961894 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.961902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:58.961908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:58.961973 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:58.987453 1498704 cri.go:89] found id: ""
	I1217 02:06:58.987476 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.987485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:58.987492 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:58.987589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:59.014256 1498704 cri.go:89] found id: ""
	I1217 02:06:59.014281 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.014290 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:59.014296 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:59.014356 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:59.043181 1498704 cri.go:89] found id: ""
	I1217 02:06:59.043206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.043214 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:59.043224 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:59.043265 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:59.069988 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:59.070014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:59.126583 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:59.126616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:59.143769 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:59.143858 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:59.206336 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:59.206357 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:59.206368 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:59.467894 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:59.526704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.526801 1498704 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
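Note: storage-provisioner fails for the same root cause as the dashboard manifests, and the default-storageclass apply that follows fails identically: validation cannot download the OpenAPI document from the unreachable apiserver. The --validate=false hint in stderr would not rescue the apply, since the request itself would still hit the refused connection. An illustrative check of the socket itself (assuming ss is available in the node image):

    # Illustrative: confirm nothing is bound to the apiserver port inside the
    # node before expecting any addon apply to succeed.
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"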
	I1217 02:07:00.964501 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:01.024877 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:01.024990 1498704 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:01.030055 1498704 out.go:179] * Enabled addons: 
	I1217 02:07:01.032983 1498704 addons.go:530] duration metric: took 1m40.577449503s for enable addons: enabled=[]
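Note: "enabled=[]" records that no addon survived its callbacks within the 1m40s window; addon failures are warnings rather than fatal errors, so start continues with an empty list. After the fact, the resulting addon state can be inspected with (profile name is a placeholder):

    # Illustrative: list addon status for the profile after the failed start.
    minikube addons list -p <profile>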
	I1217 02:07:01.732628 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:01.743041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:01.743116 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:01.767462 1498704 cri.go:89] found id: ""
	I1217 02:07:01.767488 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.767497 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:01.767503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:01.767602 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:01.793082 1498704 cri.go:89] found id: ""
	I1217 02:07:01.793104 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.793112 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:01.793119 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:01.793179 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:01.819716 1498704 cri.go:89] found id: ""
	I1217 02:07:01.819740 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.819749 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:01.819755 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:01.819815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:01.847485 1498704 cri.go:89] found id: ""
	I1217 02:07:01.847556 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.847572 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:01.847580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:01.847641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:01.875985 1498704 cri.go:89] found id: ""
	I1217 02:07:01.876062 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.876084 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:01.876103 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:01.876193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:01.910714 1498704 cri.go:89] found id: ""
	I1217 02:07:01.910739 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.910748 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:01.910754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:01.910813 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:01.937846 1498704 cri.go:89] found id: ""
	I1217 02:07:01.937871 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.937880 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:01.937886 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:01.937945 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:01.964067 1498704 cri.go:89] found id: ""
	I1217 02:07:01.964091 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.964100 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:01.964114 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:01.964126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:02.028700 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:02.028724 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:02.028739 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:02.054141 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:02.054180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:02.082544 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:02.082570 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:02.139516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:02.139555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.654404 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:04.665750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:04.665823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:04.692548 1498704 cri.go:89] found id: ""
	I1217 02:07:04.692573 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.692582 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:04.692589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:04.692649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:04.716945 1498704 cri.go:89] found id: ""
	I1217 02:07:04.716971 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.716980 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:04.716986 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:04.717050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:04.741853 1498704 cri.go:89] found id: ""
	I1217 02:07:04.741919 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.741943 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:04.741956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:04.742029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:04.766368 1498704 cri.go:89] found id: ""
	I1217 02:07:04.766432 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.766456 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:04.766471 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:04.766543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:04.791787 1498704 cri.go:89] found id: ""
	I1217 02:07:04.791811 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.791819 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:04.791826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:04.791886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:04.817229 1498704 cri.go:89] found id: ""
	I1217 02:07:04.817255 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.817264 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:04.817271 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:04.817343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:04.841915 1498704 cri.go:89] found id: ""
	I1217 02:07:04.841938 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.841947 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:04.841953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:04.842013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:04.866862 1498704 cri.go:89] found id: ""
	I1217 02:07:04.866889 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.866898 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:04.866908 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:04.866920 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:04.930507 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:04.930554 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.948025 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:04.948060 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:05.019651 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:05.019675 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:05.019688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:05.046001 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:05.046036 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.578495 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:07.591153 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:07.591225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:07.621427 1498704 cri.go:89] found id: ""
	I1217 02:07:07.621450 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.621459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:07.621466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:07.621526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:07.661892 1498704 cri.go:89] found id: ""
	I1217 02:07:07.661915 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.661923 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:07.661929 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:07.661995 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:07.695665 1498704 cri.go:89] found id: ""
	I1217 02:07:07.695693 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.695703 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:07.695709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:07.695775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:07.721278 1498704 cri.go:89] found id: ""
	I1217 02:07:07.721308 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.721316 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:07.721323 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:07.721381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:07.745368 1498704 cri.go:89] found id: ""
	I1217 02:07:07.745396 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.745404 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:07.745411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:07.745469 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:07.773994 1498704 cri.go:89] found id: ""
	I1217 02:07:07.774017 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.774025 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:07.774032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:07.774094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:07.799025 1498704 cri.go:89] found id: ""
	I1217 02:07:07.799049 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.799058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:07.799070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:07.799128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:07.824235 1498704 cri.go:89] found id: ""
	I1217 02:07:07.824261 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.824270 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:07.824278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:07.824290 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:07.839101 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:07.839129 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:07.923334 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:07.923360 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:07.923372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:07.949715 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:07.949754 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.977665 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:07.977690 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.537062 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:10.547797 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:10.547872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:10.572434 1498704 cri.go:89] found id: ""
	I1217 02:07:10.572462 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.572472 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:10.572479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:10.572560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:10.597486 1498704 cri.go:89] found id: ""
	I1217 02:07:10.597510 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.597519 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:10.597525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:10.597591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:10.627205 1498704 cri.go:89] found id: ""
	I1217 02:07:10.627227 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.627236 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:10.627241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:10.627316 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:10.661788 1498704 cri.go:89] found id: ""
	I1217 02:07:10.661815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.661825 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:10.661832 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:10.661892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:10.694378 1498704 cri.go:89] found id: ""
	I1217 02:07:10.694403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.694411 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:10.694417 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:10.694481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:10.719732 1498704 cri.go:89] found id: ""
	I1217 02:07:10.719759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.719768 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:10.719775 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:10.719834 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:10.746071 1498704 cri.go:89] found id: ""
	I1217 02:07:10.746141 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.746169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:10.746181 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:10.746257 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:10.771251 1498704 cri.go:89] found id: ""
	I1217 02:07:10.771324 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.771339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:10.771349 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:10.771363 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:10.797277 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:10.797316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:10.824227 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:10.824255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.883648 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:10.883685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:10.899500 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:10.899545 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:10.971848 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:13.472155 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:13.482654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:13.482730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:13.511840 1498704 cri.go:89] found id: ""
	I1217 02:07:13.511865 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.511874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:13.511880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:13.511938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:13.539314 1498704 cri.go:89] found id: ""
	I1217 02:07:13.539340 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.539349 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:13.539355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:13.539418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:13.564523 1498704 cri.go:89] found id: ""
	I1217 02:07:13.564595 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.564616 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:13.564635 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:13.564722 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:13.588672 1498704 cri.go:89] found id: ""
	I1217 02:07:13.588696 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.588705 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:13.588711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:13.588769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:13.613292 1498704 cri.go:89] found id: ""
	I1217 02:07:13.613370 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.613394 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:13.613413 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:13.613497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:13.640379 1498704 cri.go:89] found id: ""
	I1217 02:07:13.640401 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.640467 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:13.640475 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:13.640596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:13.670823 1498704 cri.go:89] found id: ""
	I1217 02:07:13.670897 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.670909 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:13.670915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:13.671033 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:13.697928 1498704 cri.go:89] found id: ""
	I1217 02:07:13.697954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.697963 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:13.697973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:13.697991 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:13.764081 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:13.764103 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:13.764117 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:13.789698 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:13.789735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:13.817458 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:13.817528 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:13.873570 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:13.873604 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
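Each polling round above runs the same per-component CRI query, crictl ps -a --quiet --name=<component>, against the containerd root /run/containerd/runc/k8s.io, and every query comes back empty because no control-plane containers were ever created. A minimal sketch of that check, runnable inside the affected node (reaching the node via "minikube ssh" is an assumption, not shown in this log):

    # Sketch of the per-component query minikube issues above; the component
    # names and crictl flags are taken from the Run: lines in this log.
    # Empty output means "No container was found matching <name>", i.e. the
    # W-level lines repeated throughout this section.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no containers matching $name" || echo "$name: $ids"
    done
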
	I1217 02:07:16.390490 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:16.400824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:16.400892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:16.433284 1498704 cri.go:89] found id: ""
	I1217 02:07:16.433306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.433315 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:16.433321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:16.433382 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:16.459029 1498704 cri.go:89] found id: ""
	I1217 02:07:16.459051 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.459059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:16.459065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:16.459123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:16.482532 1498704 cri.go:89] found id: ""
	I1217 02:07:16.482559 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.482568 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:16.482574 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:16.482635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:16.508099 1498704 cri.go:89] found id: ""
	I1217 02:07:16.508126 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.508135 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:16.508141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:16.508198 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:16.537293 1498704 cri.go:89] found id: ""
	I1217 02:07:16.537327 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.537336 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:16.537343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:16.537422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:16.561736 1498704 cri.go:89] found id: ""
	I1217 02:07:16.561761 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.561769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:16.561776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:16.561841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:16.588020 1498704 cri.go:89] found id: ""
	I1217 02:07:16.588054 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.588063 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:16.588069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:16.588136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:16.614951 1498704 cri.go:89] found id: ""
	I1217 02:07:16.614983 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.614993 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:16.615018 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:16.615035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:16.674706 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:16.674738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.693871 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:16.694008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:16.761779 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:16.761800 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:16.761813 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:16.788228 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:16.788270 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:19.320399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:19.330773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:19.330845 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:19.354921 1498704 cri.go:89] found id: ""
	I1217 02:07:19.354990 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.355015 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:19.355028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:19.355100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:19.383572 1498704 cri.go:89] found id: ""
	I1217 02:07:19.383648 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.383662 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:19.383670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:19.383735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:19.412179 1498704 cri.go:89] found id: ""
	I1217 02:07:19.412204 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.412213 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:19.412229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:19.412290 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:19.437924 1498704 cri.go:89] found id: ""
	I1217 02:07:19.437950 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.437959 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:19.437966 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:19.438057 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:19.462416 1498704 cri.go:89] found id: ""
	I1217 02:07:19.462483 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.462507 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:19.462528 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:19.462618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:19.486955 1498704 cri.go:89] found id: ""
	I1217 02:07:19.487022 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.487047 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:19.487061 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:19.487133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:19.517143 1498704 cri.go:89] found id: ""
	I1217 02:07:19.517170 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.517178 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:19.517185 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:19.517245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:19.541419 1498704 cri.go:89] found id: ""
	I1217 02:07:19.541443 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.541452 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:19.541462 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:19.541474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:19.600586 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:19.600621 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:19.615645 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:19.615673 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:19.700496 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:19.700518 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:19.700531 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:19.725860 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:19.725896 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
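The "describe nodes" probes all fail the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8443. Two quick checks that confirm the symptom, sketched under the assumption that curl is available in the node image (the pgrep invocation is copied from this log; /livez is the apiserver's standard liveness endpoint):

    # Confirm the symptom behind the repeated "connection refused" stderr:
    # no kube-apiserver process, hence nothing serving on port 8443.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable on 8443"
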
	I1217 02:07:22.254753 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:22.266831 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:22.266902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:22.291227 1498704 cri.go:89] found id: ""
	I1217 02:07:22.291306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.291329 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:22.291344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:22.291421 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:22.317812 1498704 cri.go:89] found id: ""
	I1217 02:07:22.317835 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.317844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:22.317850 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:22.317929 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:22.341950 1498704 cri.go:89] found id: ""
	I1217 02:07:22.341973 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.341982 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:22.341991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:22.342074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:22.368217 1498704 cri.go:89] found id: ""
	I1217 02:07:22.368291 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.368330 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:22.368350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:22.368435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:22.396888 1498704 cri.go:89] found id: ""
	I1217 02:07:22.396911 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.396920 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:22.396926 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:22.396987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:22.420964 1498704 cri.go:89] found id: ""
	I1217 02:07:22.421040 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.421064 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:22.421083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:22.421163 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:22.446890 1498704 cri.go:89] found id: ""
	I1217 02:07:22.446954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.446980 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:22.447002 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:22.447067 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:22.475922 1498704 cri.go:89] found id: ""
	I1217 02:07:22.475949 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.475959 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:22.475968 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:22.475980 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:22.532457 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:22.532490 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:22.546823 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:22.546900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:22.612059 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:22.612089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:22.612102 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:22.642268 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:22.642325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.182933 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:25.194033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:25.194115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:25.218403 1498704 cri.go:89] found id: ""
	I1217 02:07:25.218426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.218434 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:25.218441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:25.218500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:25.247233 1498704 cri.go:89] found id: ""
	I1217 02:07:25.247257 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.247267 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:25.247272 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:25.247337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:25.271255 1498704 cri.go:89] found id: ""
	I1217 02:07:25.271278 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.271286 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:25.271292 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:25.271354 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:25.295129 1498704 cri.go:89] found id: ""
	I1217 02:07:25.295152 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.295161 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:25.295167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:25.295232 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:25.323735 1498704 cri.go:89] found id: ""
	I1217 02:07:25.323802 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.323818 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:25.323826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:25.323895 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:25.348083 1498704 cri.go:89] found id: ""
	I1217 02:07:25.348107 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.348116 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:25.348123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:25.348187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:25.375945 1498704 cri.go:89] found id: ""
	I1217 02:07:25.375967 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.375976 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:25.375982 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:25.376046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:25.404167 1498704 cri.go:89] found id: ""
	I1217 02:07:25.404190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.404199 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:25.404207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:25.404219 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.432830 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:25.432905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:25.491437 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:25.491472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:25.506773 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:25.506811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:25.571857 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:25.563411    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.564290    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566145    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566486    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.567944    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:25.571879 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:25.571891 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
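Taken together, the timestamps (02:07:13, :16, :19, :22, :25, ...) show a roughly three-second retry cadence: on every attempt minikube re-checks for a kube-apiserver process and regathers the kubelet, dmesg, containerd, and container-status logs. An illustrative reconstruction of that wait loop, with an assumed five-minute deadline (the actual timeout constant is not visible in this log):

    # Illustrative sketch only; the 300s deadline is an assumption, not
    # minikube's real constant. The pgrep pattern is copied from the log.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; exit 1; }
      sleep 3
    done
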
	I1217 02:07:28.097148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:28.109420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:28.109492 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:28.147274 1498704 cri.go:89] found id: ""
	I1217 02:07:28.147301 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.147310 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:28.147317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:28.147375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:28.182487 1498704 cri.go:89] found id: ""
	I1217 02:07:28.182520 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.182529 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:28.182535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:28.182605 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:28.210414 1498704 cri.go:89] found id: ""
	I1217 02:07:28.210492 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.210506 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:28.210513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:28.210596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:28.236032 1498704 cri.go:89] found id: ""
	I1217 02:07:28.236067 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.236076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:28.236100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:28.236187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:28.261848 1498704 cri.go:89] found id: ""
	I1217 02:07:28.261925 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.261949 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:28.261961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:28.262023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:28.287575 1498704 cri.go:89] found id: ""
	I1217 02:07:28.287642 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.287667 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:28.287681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:28.287753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:28.311909 1498704 cri.go:89] found id: ""
	I1217 02:07:28.311942 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.311950 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:28.311974 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:28.312055 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:28.338978 1498704 cri.go:89] found id: ""
	I1217 02:07:28.338999 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.339013 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:28.339041 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:28.339059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:28.395245 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:28.395283 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:28.410155 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:28.410183 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:28.473762 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:28.465176    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.465695    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467313    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467841    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.469624    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:28.473783 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:28.473807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:28.499695 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:28.499728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.034443 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:31.045062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:31.045138 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:31.071798 1498704 cri.go:89] found id: ""
	I1217 02:07:31.071825 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.071835 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:31.071842 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:31.071912 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:31.102760 1498704 cri.go:89] found id: ""
	I1217 02:07:31.102787 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.102795 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:31.102802 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:31.102866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:31.141278 1498704 cri.go:89] found id: ""
	I1217 02:07:31.141303 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.141313 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:31.141320 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:31.141385 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:31.171560 1498704 cri.go:89] found id: ""
	I1217 02:07:31.171590 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.171599 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:31.171606 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:31.171671 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:31.198647 1498704 cri.go:89] found id: ""
	I1217 02:07:31.198713 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.198736 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:31.198749 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:31.198822 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:31.223451 1498704 cri.go:89] found id: ""
	I1217 02:07:31.223534 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.223560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:31.223580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:31.223660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:31.253387 1498704 cri.go:89] found id: ""
	I1217 02:07:31.253413 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.253422 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:31.253428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:31.253487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:31.278792 1498704 cri.go:89] found id: ""
	I1217 02:07:31.278815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.278823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:31.278832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:31.278843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:31.303758 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:31.303790 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.332180 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:31.332251 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:31.388186 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:31.388222 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:31.402632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:31.402661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:31.464007 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:31.455376    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456162    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456959    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458412    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458952    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:33.964236 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:33.974724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:33.974801 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:33.997812 1498704 cri.go:89] found id: ""
	I1217 02:07:33.997833 1498704 logs.go:282] 0 containers: []
	W1217 02:07:33.997841 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:33.997847 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:33.997918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:34.028229 1498704 cri.go:89] found id: ""
	I1217 02:07:34.028256 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.028265 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:34.028273 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:34.028333 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:34.053400 1498704 cri.go:89] found id: ""
	I1217 02:07:34.053426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.053437 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:34.053444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:34.053504 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:34.079351 1498704 cri.go:89] found id: ""
	I1217 02:07:34.079419 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.079433 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:34.079441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:34.079499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:34.106192 1498704 cri.go:89] found id: ""
	I1217 02:07:34.106228 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.106237 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:34.106244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:34.106315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:34.147697 1498704 cri.go:89] found id: ""
	I1217 02:07:34.147759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.147785 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:34.147810 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:34.147890 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:34.176177 1498704 cri.go:89] found id: ""
	I1217 02:07:34.176244 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.176268 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:34.176288 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:34.176365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:34.205945 1498704 cri.go:89] found id: ""
	I1217 02:07:34.206007 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.206035 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:34.206056 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:34.206081 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:34.262276 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:34.262309 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:34.276944 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:34.276971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:34.338908 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:34.331218    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.331638    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333081    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333377    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.334783    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:34.338934 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:34.338947 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:34.363617 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:34.363647 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:36.891296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:36.902860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:36.902927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:36.930707 1498704 cri.go:89] found id: ""
	I1217 02:07:36.930733 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.930747 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:36.930754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:36.930811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:36.955573 1498704 cri.go:89] found id: ""
	I1217 02:07:36.955597 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.955605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:36.955611 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:36.955668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:36.980409 1498704 cri.go:89] found id: ""
	I1217 02:07:36.980434 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.980444 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:36.980450 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:36.980508 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:37.009442 1498704 cri.go:89] found id: ""
	I1217 02:07:37.009467 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.009477 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:37.009484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:37.009551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:37.037149 1498704 cri.go:89] found id: ""
	I1217 02:07:37.037171 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.037180 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:37.037186 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:37.037250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:37.061767 1498704 cri.go:89] found id: ""
	I1217 02:07:37.061792 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.061801 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:37.061818 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:37.061889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:37.085968 1498704 cri.go:89] found id: ""
	I1217 02:07:37.085993 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.086003 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:37.086009 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:37.086074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:37.115273 1498704 cri.go:89] found id: ""
	I1217 02:07:37.115295 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.115303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:37.115312 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:37.115323 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:37.173190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:37.173223 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:37.190802 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:37.190834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:37.258464 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:37.250353    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.250978    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.252515    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.253019    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.254562    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:37.258486 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:37.258498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:37.283631 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:37.283665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:39.816914 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:39.827386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:39.827463 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:39.852104 1498704 cri.go:89] found id: ""
	I1217 02:07:39.852129 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.852139 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:39.852145 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:39.852204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:39.892785 1498704 cri.go:89] found id: ""
	I1217 02:07:39.892806 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.892815 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:39.892822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:39.892887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:39.923500 1498704 cri.go:89] found id: ""
	I1217 02:07:39.923530 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.923538 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:39.923544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:39.923603 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:39.949968 1498704 cri.go:89] found id: ""
	I1217 02:07:39.949995 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.950004 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:39.950010 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:39.950071 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:39.974479 1498704 cri.go:89] found id: ""
	I1217 02:07:39.974500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.974508 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:39.974515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:39.974572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:40.015259 1498704 cri.go:89] found id: ""
	I1217 02:07:40.015286 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.015296 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:40.015303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:40.015375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:40.045029 1498704 cri.go:89] found id: ""
	I1217 02:07:40.045055 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.045064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:40.045071 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:40.045135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:40.072784 1498704 cri.go:89] found id: ""
	I1217 02:07:40.072818 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.072833 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:40.072843 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:40.072860 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:40.153737 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:40.142795    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.144161    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.145378    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.146432    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.147502    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:40.153765 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:40.153780 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:40.189498 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:40.189552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:40.222768 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:40.222844 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:40.279190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:40.279224 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:42.796231 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:42.806670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:42.806738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:42.830230 1498704 cri.go:89] found id: ""
	I1217 02:07:42.830250 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.830258 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:42.830265 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:42.830323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:42.855478 1498704 cri.go:89] found id: ""
	I1217 02:07:42.855500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.855509 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:42.855515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:42.855580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:42.894494 1498704 cri.go:89] found id: ""
	I1217 02:07:42.894522 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.894530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:42.894536 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:42.894593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:42.921324 1498704 cri.go:89] found id: ""
	I1217 02:07:42.921350 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.921359 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:42.921365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:42.921435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:42.953266 1498704 cri.go:89] found id: ""
	I1217 02:07:42.953290 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.953299 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:42.953305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:42.953366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:42.977816 1498704 cri.go:89] found id: ""
	I1217 02:07:42.977841 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.977850 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:42.977856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:42.977917 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:43.003747 1498704 cri.go:89] found id: ""
	I1217 02:07:43.003839 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.003865 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:43.003880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:43.003963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:43.029772 1498704 cri.go:89] found id: ""
	I1217 02:07:43.029797 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.029806 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:43.029816 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:43.029828 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:43.055443 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:43.055476 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:43.084076 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:43.084104 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:43.145546 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:43.145607 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:43.161920 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:43.161999 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:43.231831 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:43.222961    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.223493    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225230    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225634    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.227364    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
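Every polling round in this log asks crictl for the same eight container names, and each query comes back empty, which is what produces the paired found id: "" and "No container was found matching ..." lines. One round is equivalent to this loop (a sketch of the commands already shown above; the echo is illustrative, not minikube output):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done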
	I1217 02:07:45.733506 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:45.744340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:45.744408 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:45.769934 1498704 cri.go:89] found id: ""
	I1217 02:07:45.769957 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.769965 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:45.769971 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:45.770034 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:45.795238 1498704 cri.go:89] found id: ""
	I1217 02:07:45.795263 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.795272 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:45.795279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:45.795343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:45.821898 1498704 cri.go:89] found id: ""
	I1217 02:07:45.821922 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.821930 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:45.821937 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:45.821999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:45.847109 1498704 cri.go:89] found id: ""
	I1217 02:07:45.847132 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.847140 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:45.847146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:45.847208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:45.880160 1498704 cri.go:89] found id: ""
	I1217 02:07:45.880190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.880199 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:45.880205 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:45.880271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:45.910818 1498704 cri.go:89] found id: ""
	I1217 02:07:45.910850 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.910859 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:45.910866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:45.910927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:45.939378 1498704 cri.go:89] found id: ""
	I1217 02:07:45.939403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.939413 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:45.939419 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:45.939480 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:45.966395 1498704 cri.go:89] found id: ""
	I1217 02:07:45.966421 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.966430 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:45.966440 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:45.966479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:45.981177 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:45.981203 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:46.055154 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:46.045816    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.046563    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.048453    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.049038    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.050565    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:46.055186 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:46.055204 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:46.081781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:46.081822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:46.110247 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:46.110271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:48.673749 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:48.684117 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:48.684190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:48.710141 1498704 cri.go:89] found id: ""
	I1217 02:07:48.710163 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.710171 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:48.710177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:48.710242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:48.735609 1498704 cri.go:89] found id: ""
	I1217 02:07:48.735631 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.735639 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:48.735648 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:48.735707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:48.760494 1498704 cri.go:89] found id: ""
	I1217 02:07:48.760517 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.760525 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:48.760532 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:48.760592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:48.786553 1498704 cri.go:89] found id: ""
	I1217 02:07:48.786574 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.786582 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:48.786588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:48.786645 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:48.815529 1498704 cri.go:89] found id: ""
	I1217 02:07:48.815551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.815560 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:48.815566 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:48.815623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:48.839528 1498704 cri.go:89] found id: ""
	I1217 02:07:48.839551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.839560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:48.839567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:48.839649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:48.870240 1498704 cri.go:89] found id: ""
	I1217 02:07:48.870266 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.870275 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:48.870282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:48.870363 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:48.906712 1498704 cri.go:89] found id: ""
	I1217 02:07:48.906736 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.906746 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:48.906756 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:48.906786 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:48.934786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:48.934865 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:48.964758 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:48.964785 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:49.022291 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:49.022326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:49.036990 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:49.037025 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:49.101921 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:49.093270    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.093786    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095214    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095625    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.097015    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
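The pgrep timestamps (02:07:36, :39, :42, :45, :48, ...) show the apiserver health check retrying on a roughly three-second cadence until the overall start timeout expires. The wait amounts to the following (a sketch inferred from the timestamps, not taken from minikube source):

    # poll until an apiserver process for this profile shows up
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done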
	I1217 02:07:51.602715 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.614088 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:51.614167 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:51.640614 1498704 cri.go:89] found id: ""
	I1217 02:07:51.640639 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.640648 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:51.640655 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:51.640716 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:51.665595 1498704 cri.go:89] found id: ""
	I1217 02:07:51.665622 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.665631 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:51.665637 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:51.665727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:51.690508 1498704 cri.go:89] found id: ""
	I1217 02:07:51.690532 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.690541 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:51.690547 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:51.690627 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:51.717537 1498704 cri.go:89] found id: ""
	I1217 02:07:51.717561 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.717570 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:51.717577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:51.717638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:51.742073 1498704 cri.go:89] found id: ""
	I1217 02:07:51.742095 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.742104 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:51.742110 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:51.742169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:51.768165 1498704 cri.go:89] found id: ""
	I1217 02:07:51.768188 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.768234 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:51.768255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:51.768322 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:51.793095 1498704 cri.go:89] found id: ""
	I1217 02:07:51.793118 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.793127 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:51.793133 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:51.793195 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:51.817679 1498704 cri.go:89] found id: ""
	I1217 02:07:51.817701 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.817710 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:51.817720 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:51.817730 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:51.874453 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:51.874486 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:51.890393 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:51.890418 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:51.966182 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.966201 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:51.966214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:51.992382 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:51.992417 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.525060 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:54.535685 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:54.535760 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:54.563912 1498704 cri.go:89] found id: ""
	I1217 02:07:54.563935 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.563944 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:54.563950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:54.564011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:54.588995 1498704 cri.go:89] found id: ""
	I1217 02:07:54.589020 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.589031 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:54.589038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:54.589101 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:54.615173 1498704 cri.go:89] found id: ""
	I1217 02:07:54.615198 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.615207 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:54.615214 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:54.615277 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:54.640498 1498704 cri.go:89] found id: ""
	I1217 02:07:54.640523 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.640532 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:54.640539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:54.640623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:54.666201 1498704 cri.go:89] found id: ""
	I1217 02:07:54.666226 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.666234 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:54.666241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:54.666303 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:54.690876 1498704 cri.go:89] found id: ""
	I1217 02:07:54.690899 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.690908 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:54.690915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:54.690974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:54.714932 1498704 cri.go:89] found id: ""
	I1217 02:07:54.715000 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.715024 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:54.715043 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:54.715133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:54.739880 1498704 cri.go:89] found id: ""
	I1217 02:07:54.739906 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.739926 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:54.739952 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:54.739978 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:54.804035 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:54.804056 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:54.804070 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:54.829994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:54.830030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.858611 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:54.858639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:54.921120 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:54.921196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
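With no Kubernetes containers to inspect, the only substantive diagnostics each round are the host-level collectors shown above: the newest 400 journal lines for kubelet and containerd plus warning-and-above kernel messages. Run by hand, the same commands are:

    sudo journalctl -u kubelet -n 400        # newest kubelet entries
    sudo journalctl -u containerd -n 400     # newest containerd entries
    # -H human-readable, -P no pager, -L=never no color, --level severity filter
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400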
	I1217 02:07:57.438546 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.448669 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:57.448736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:57.475324 1498704 cri.go:89] found id: ""
	I1217 02:07:57.475346 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.475355 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:57.475362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:57.475419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:57.505098 1498704 cri.go:89] found id: ""
	I1217 02:07:57.505123 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.505131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:57.505137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:57.505196 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:57.529496 1498704 cri.go:89] found id: ""
	I1217 02:07:57.529519 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.529529 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:57.529535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:57.529601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:57.560154 1498704 cri.go:89] found id: ""
	I1217 02:07:57.560179 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.560188 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:57.560194 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:57.560256 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:57.584872 1498704 cri.go:89] found id: ""
	I1217 02:07:57.584898 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.584912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:57.584919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:57.584976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:57.611897 1498704 cri.go:89] found id: ""
	I1217 02:07:57.611930 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.611938 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:57.611945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:57.612004 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:57.636969 1498704 cri.go:89] found id: ""
	I1217 02:07:57.636991 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.636999 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:57.637006 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:57.637069 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:57.661285 1498704 cri.go:89] found id: ""
	I1217 02:07:57.661312 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.661320 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:57.661329 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:57.661340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:57.717030 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:57.717066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.732556 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:57.732588 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:57.802383 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
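The describe-nodes failure is a client-side symptom: nothing is listening on localhost:8443, so every request dies with connection refused. A quick check from inside the node; ss and curl availability are assumptions here, and /livez is the standard apiserver liveness endpoint:

    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"        # anything bound to the apiserver port?
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"
    # once a listener appears, the exact command the log keeps retrying will succeed:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig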
	I1217 02:07:57.802403 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:57.802414 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:57.831640 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:57.831729 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.359786 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.375104 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:00.375194 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:00.418191 1498704 cri.go:89] found id: ""
	I1217 02:08:00.418222 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.418232 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:00.418239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:00.418315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:00.456739 1498704 cri.go:89] found id: ""
	I1217 02:08:00.456766 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.456775 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:00.456782 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:00.456850 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:00.488069 1498704 cri.go:89] found id: ""
	I1217 02:08:00.488097 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.488106 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:00.488115 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:00.488180 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:00.522338 1498704 cri.go:89] found id: ""
	I1217 02:08:00.522369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.522383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:00.522391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:00.522477 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:00.552999 1498704 cri.go:89] found id: ""
	I1217 02:08:00.553026 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.553035 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:00.553041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:00.553105 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:00.579678 1498704 cri.go:89] found id: ""
	I1217 02:08:00.579710 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.579719 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:00.579725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:00.579787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:00.605680 1498704 cri.go:89] found id: ""
	I1217 02:08:00.605708 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.605717 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:00.605724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:00.605787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:00.632147 1498704 cri.go:89] found id: ""
	I1217 02:08:00.632172 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.632181 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:00.632191 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:00.632202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:00.658405 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:00.658442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.687017 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:00.687042 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:00.743960 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:00.743997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:00.758928 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:00.758957 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:00.826075 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
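The "Gathering logs" steps map one-to-one onto ordinary journalctl, dmesg, and crictl calls, exactly as shown in the Run: lines above, and can be replayed on the node as-is:

    sudo journalctl -u kubelet -n 400       # last 400 lines of the kubelet unit
    sudo journalctl -u containerd -n 400    # last 400 lines of the runtime unit
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker as last resort

The backtick expression is a fallback pattern: if which finds no crictl the bare name is tried anyway, and if that fails too the docker CLI is used instead.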
	I1217 02:08:03.326352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.337106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:03.337176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:03.362079 1498704 cri.go:89] found id: ""
	I1217 02:08:03.362103 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.362112 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:03.362120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:03.362185 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:03.406055 1498704 cri.go:89] found id: ""
	I1217 02:08:03.406078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.406086 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:03.406092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:03.406153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:03.469689 1498704 cri.go:89] found id: ""
	I1217 02:08:03.469719 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.469728 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:03.469734 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:03.469795 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:03.495363 1498704 cri.go:89] found id: ""
	I1217 02:08:03.495388 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.495397 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:03.495403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:03.495462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:03.520987 1498704 cri.go:89] found id: ""
	I1217 02:08:03.521020 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.521029 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:03.521035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:03.521104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:03.546993 1498704 cri.go:89] found id: ""
	I1217 02:08:03.547070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.547086 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:03.547094 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:03.547157 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:03.572356 1498704 cri.go:89] found id: ""
	I1217 02:08:03.572381 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.572390 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:03.572396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:03.572465 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:03.601007 1498704 cri.go:89] found id: ""
	I1217 02:08:03.601039 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.601048 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:03.601058 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:03.601069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:03.626163 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:03.626198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:03.653854 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:03.653882 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:03.711530 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:03.711566 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:03.726308 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:03.726377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:03.794467 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
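Each cycle is gated on the pgrep probe at its head: the loop only moves on once a kube-apiserver process whose full command line also matches minikube exists. The probe in isolation, with the flags as used in the log (-x exact match, -n newest, -f match against the full command line):

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
      echo "apiserver process found"
    else
      echo "apiserver process not running yet"   # the branch taken on every cycle in this log
    fi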
	I1217 02:08:06.296166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.306860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:06.306931 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:06.335081 1498704 cri.go:89] found id: ""
	I1217 02:08:06.335118 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.335128 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:06.335140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:06.335216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:06.360315 1498704 cri.go:89] found id: ""
	I1217 02:08:06.360337 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.360346 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:06.360353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:06.360416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:06.438162 1498704 cri.go:89] found id: ""
	I1217 02:08:06.438184 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.438193 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:06.438201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:06.438260 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:06.473712 1498704 cri.go:89] found id: ""
	I1217 02:08:06.473739 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.473750 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:06.473757 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:06.473821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:06.501185 1498704 cri.go:89] found id: ""
	I1217 02:08:06.501213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.501223 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:06.501229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:06.501291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:06.527618 1498704 cri.go:89] found id: ""
	I1217 02:08:06.527642 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.527650 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:06.527657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:06.527723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:06.551855 1498704 cri.go:89] found id: ""
	I1217 02:08:06.551882 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.551892 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:06.551899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:06.551982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:06.577516 1498704 cri.go:89] found id: ""
	I1217 02:08:06.577547 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.577556 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:06.577566 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:06.577577 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:06.592728 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:06.592762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:06.660537 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:06.660559 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:06.660572 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:06.685272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:06.685307 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:06.716733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:06.716761 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
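The cri.go lines name the runtime root being scanned, /run/containerd/runc/k8s.io, i.e. the k8s.io containerd namespace. The same view is available through containerd's own CLI, assuming ctr is present in the node image:

    sudo ctr --namespace k8s.io containers list   # containerd's record of Kubernetes containers
    sudo ctr --namespace k8s.io tasks list        # which of those have a live task

Both lists coming back empty would be consistent with every crictl probe above returning no IDs.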
	I1217 02:08:09.274376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.285055 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:09.285129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:09.310445 1498704 cri.go:89] found id: ""
	I1217 02:08:09.310468 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.310477 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:09.310483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:09.310551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:09.339399 1498704 cri.go:89] found id: ""
	I1217 02:08:09.339434 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.339443 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:09.339449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:09.339539 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:09.364792 1498704 cri.go:89] found id: ""
	I1217 02:08:09.364830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.364843 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:09.364851 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:09.364921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:09.398786 1498704 cri.go:89] found id: ""
	I1217 02:08:09.398813 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.398822 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:09.398829 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:09.398898 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:09.437605 1498704 cri.go:89] found id: ""
	I1217 02:08:09.437633 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.437670 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:09.437696 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:09.437778 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:09.469389 1498704 cri.go:89] found id: ""
	I1217 02:08:09.469430 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.469439 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:09.469446 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:09.469557 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:09.501822 1498704 cri.go:89] found id: ""
	I1217 02:08:09.501847 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.501856 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:09.501873 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:09.501953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:09.526536 1498704 cri.go:89] found id: ""
	I1217 02:08:09.526604 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.526627 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:09.526649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:09.526685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:09.553800 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:09.553829 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:09.611333 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:09.611367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:09.626057 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:09.626083 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:09.690274 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
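The timestamps show one full probe cycle roughly every three seconds. A sketch of an equivalent wait loop; the 120-second budget is an illustrative assumption, not the timeout this test actually enforces:

    deadline=$(( $(date +%s) + 120 ))   # assumed budget for the sketch only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3   # matches the ~3 s cadence of the cycles in this log
    done
    echo "kube-apiserver is running"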
	I1217 02:08:09.690296 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:09.690308 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.216656 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.226983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:12.227094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:12.251590 1498704 cri.go:89] found id: ""
	I1217 02:08:12.251613 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.251622 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:12.251628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:12.251686 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:12.276257 1498704 cri.go:89] found id: ""
	I1217 02:08:12.276285 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.276293 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:12.276308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:12.276365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:12.300603 1498704 cri.go:89] found id: ""
	I1217 02:08:12.300628 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.300637 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:12.300643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:12.300704 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:12.328528 1498704 cri.go:89] found id: ""
	I1217 02:08:12.328552 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.328561 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:12.328571 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:12.328629 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:12.353931 1498704 cri.go:89] found id: ""
	I1217 02:08:12.353954 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.353963 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:12.353969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:12.354031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:12.426173 1498704 cri.go:89] found id: ""
	I1217 02:08:12.426238 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.426263 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:12.426283 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:12.426375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:12.463406 1498704 cri.go:89] found id: ""
	I1217 02:08:12.463432 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.463441 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:12.463447 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:12.463511 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:12.491432 1498704 cri.go:89] found id: ""
	I1217 02:08:12.491457 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.491466 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:12.491476 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:12.491487 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:12.549942 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:12.549979 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:12.566124 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:12.566160 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:12.632809 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:12.632878 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:12.632899 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.657969 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:12.658007 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:15.189789 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.200614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:15.200684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:15.224844 1498704 cri.go:89] found id: ""
	I1217 02:08:15.224865 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.224874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:15.224880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:15.224939 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:15.253351 1498704 cri.go:89] found id: ""
	I1217 02:08:15.253417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.253441 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:15.253459 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:15.253547 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:15.278140 1498704 cri.go:89] found id: ""
	I1217 02:08:15.278216 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.278238 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:15.278257 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:15.278335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:15.303296 1498704 cri.go:89] found id: ""
	I1217 02:08:15.303325 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.303334 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:15.303340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:15.303399 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:15.332342 1498704 cri.go:89] found id: ""
	I1217 02:08:15.332369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.332379 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:15.332386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:15.332442 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:15.361393 1498704 cri.go:89] found id: ""
	I1217 02:08:15.361417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.361426 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:15.361432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:15.361501 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:15.399309 1498704 cri.go:89] found id: ""
	I1217 02:08:15.399335 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.399343 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:15.399350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:15.399409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:15.441743 1498704 cri.go:89] found id: ""
	I1217 02:08:15.441769 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.441778 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:15.441787 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:15.441799 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:15.508941 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:15.508977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:15.524099 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:15.524127 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:15.595333 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:15.595351 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:15.595367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:15.620921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:15.620958 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:18.151199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.162135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:18.162207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:18.190085 1498704 cri.go:89] found id: ""
	I1217 02:08:18.190108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.190116 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:18.190123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:18.190186 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:18.218906 1498704 cri.go:89] found id: ""
	I1217 02:08:18.218930 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.218938 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:18.218944 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:18.219002 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:18.242454 1498704 cri.go:89] found id: ""
	I1217 02:08:18.242476 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.242484 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:18.242490 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:18.242549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:18.267483 1498704 cri.go:89] found id: ""
	I1217 02:08:18.267505 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.267514 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:18.267527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:18.267587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:18.291870 1498704 cri.go:89] found id: ""
	I1217 02:08:18.291894 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.291902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:18.291909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:18.291970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:18.315514 1498704 cri.go:89] found id: ""
	I1217 02:08:18.315543 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.315551 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:18.315558 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:18.315617 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:18.338958 1498704 cri.go:89] found id: ""
	I1217 02:08:18.338980 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.338988 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:18.338995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:18.339052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:18.362300 1498704 cri.go:89] found id: ""
	I1217 02:08:18.362326 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.362339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:18.362349 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:18.362361 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:18.441796 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:18.441881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:18.465294 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:18.465318 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:18.527976 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:18.527999 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:18.528012 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:18.552941 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:18.552971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
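All of the evidence above can also be collected from the host without an interactive session; <profile> below is a placeholder, since this excerpt does not name the profile under test:

    minikube status -p <profile>                    # host / kubelet / apiserver summary
    minikube logs -p <profile> --file=logs.txt      # write the same log bundle to a file
    minikube ssh -p <profile> "sudo crictl ps -a"   # run a one-off command on the node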
	I1217 02:08:21.080554 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.090872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:21.090951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:21.119427 1498704 cri.go:89] found id: ""
	I1217 02:08:21.119451 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.119459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:21.119466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:21.119531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:21.145488 1498704 cri.go:89] found id: ""
	I1217 02:08:21.145509 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.145517 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:21.145524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:21.145589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:21.171795 1498704 cri.go:89] found id: ""
	I1217 02:08:21.171822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.171830 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:21.171837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:21.171897 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:21.200041 1498704 cri.go:89] found id: ""
	I1217 02:08:21.200067 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.200076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:21.200083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:21.200144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:21.224266 1498704 cri.go:89] found id: ""
	I1217 02:08:21.224294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.224302 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:21.224310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:21.224374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:21.249832 1498704 cri.go:89] found id: ""
	I1217 02:08:21.249859 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.249868 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:21.249875 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:21.249934 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:21.276533 1498704 cri.go:89] found id: ""
	I1217 02:08:21.276556 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.276565 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:21.276577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:21.276638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:21.302869 1498704 cri.go:89] found id: ""
	I1217 02:08:21.302898 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.302906 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:21.302920 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:21.302932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:21.359571 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:21.359612 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:21.386971 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:21.387000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:21.481485 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:21.481511 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:21.481523 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:21.510229 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:21.510266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
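The probe half of each cycle above reduces to a short shell sequence: query crictl for each control-plane component, where an empty result is what produces the "No container was found matching ..." warnings, then take the overall container status with a docker fallback. The sketch below copies the component list and commands from the runs above; actually running it by hand assumes shell access to the minikube node (e.g. via minikube ssh) and sudo rights, which are assumptions here rather than anything this report states.

    #!/usr/bin/env bash
    # Sketch of the per-component probe repeated in this log. An empty
    # crictl result corresponds to the 'No container was found matching'
    # warnings above. Assumes crictl is installed on the node.
    set -u
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet kubernetes-dashboard)
    for name in "${components[@]}"; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
      fi
    done
    # Overall container status, with the same docker fallback used above:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a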
	I1217 02:08:24.042457 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.053742 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:24.053815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:24.079751 1498704 cri.go:89] found id: ""
	I1217 02:08:24.079777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.079793 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:24.079801 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:24.079863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:24.106268 1498704 cri.go:89] found id: ""
	I1217 02:08:24.106294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.106304 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:24.106310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:24.106372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:24.136105 1498704 cri.go:89] found id: ""
	I1217 02:08:24.136127 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.136141 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:24.136147 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:24.136208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:24.162676 1498704 cri.go:89] found id: ""
	I1217 02:08:24.162704 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.162713 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:24.162719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:24.162781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:24.186881 1498704 cri.go:89] found id: ""
	I1217 02:08:24.186909 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.186918 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:24.186924 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:24.186983 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:24.211784 1498704 cri.go:89] found id: ""
	I1217 02:08:24.211807 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.211816 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:24.211823 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:24.211883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:24.239768 1498704 cri.go:89] found id: ""
	I1217 02:08:24.239791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.239799 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:24.239806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:24.239863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:24.267746 1498704 cri.go:89] found id: ""
	I1217 02:08:24.267826 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.267843 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:24.267853 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:24.267864 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:24.292626 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:24.292661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:24.324726 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:24.324756 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:24.386142 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:24.386184 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:24.417577 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:24.417605 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:24.496974 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:26.997267 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.015470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:27.015561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:27.041572 1498704 cri.go:89] found id: ""
	I1217 02:08:27.041593 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.041601 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:27.041608 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:27.041697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:27.067860 1498704 cri.go:89] found id: ""
	I1217 02:08:27.067884 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.067902 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:27.067923 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:27.068020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:27.091698 1498704 cri.go:89] found id: ""
	I1217 02:08:27.091722 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.091737 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:27.091744 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:27.091804 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:27.116923 1498704 cri.go:89] found id: ""
	I1217 02:08:27.116946 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.116954 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:27.116961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:27.117020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:27.142595 1498704 cri.go:89] found id: ""
	I1217 02:08:27.142619 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.142628 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:27.142634 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:27.142693 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:27.167169 1498704 cri.go:89] found id: ""
	I1217 02:08:27.167195 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.167204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:27.167211 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:27.167271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:27.191350 1498704 cri.go:89] found id: ""
	I1217 02:08:27.191376 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.191384 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:27.191391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:27.191451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:27.216388 1498704 cri.go:89] found id: ""
	I1217 02:08:27.216413 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.216422 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:27.216431 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:27.216442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:27.279861 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:27.279884 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:27.279900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:27.304990 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:27.305027 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:27.333926 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:27.333952 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:27.396365 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:27.396403 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:29.913629 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.924284 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:29.924359 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:29.951846 1498704 cri.go:89] found id: ""
	I1217 02:08:29.951873 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.951882 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:29.951888 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:29.951948 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:29.979680 1498704 cri.go:89] found id: ""
	I1217 02:08:29.979709 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.979718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:29.979724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:29.979783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:30.017361 1498704 cri.go:89] found id: ""
	I1217 02:08:30.017494 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.017508 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:30.017517 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:30.017600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:30.055966 1498704 cri.go:89] found id: ""
	I1217 02:08:30.055994 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.056008 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:30.056015 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:30.056153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:30.086268 1498704 cri.go:89] found id: ""
	I1217 02:08:30.086296 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.086305 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:30.086313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:30.086387 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:30.114436 1498704 cri.go:89] found id: ""
	I1217 02:08:30.114474 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.114485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:30.114493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:30.114563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:30.143104 1498704 cri.go:89] found id: ""
	I1217 02:08:30.143130 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.143140 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:30.143148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:30.143215 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:30.178848 1498704 cri.go:89] found id: ""
	I1217 02:08:30.178912 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.178928 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:30.178939 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:30.178950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:30.235226 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:30.235261 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:30.250400 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:30.250427 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:30.316823 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:30.316843 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:30.316855 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:30.341943 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:30.341985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:32.880177 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.891005 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:32.891073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:32.918870 1498704 cri.go:89] found id: ""
	I1217 02:08:32.918896 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.918905 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:32.918912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:32.918970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:32.944098 1498704 cri.go:89] found id: ""
	I1217 02:08:32.944123 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.944132 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:32.944137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:32.944197 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:32.968767 1498704 cri.go:89] found id: ""
	I1217 02:08:32.968791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.968801 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:32.968806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:32.968864 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:32.992596 1498704 cri.go:89] found id: ""
	I1217 02:08:32.992624 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.992632 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:32.992638 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:32.992702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:33.018400 1498704 cri.go:89] found id: ""
	I1217 02:08:33.018424 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.018433 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:33.018439 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:33.018497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:33.043622 1498704 cri.go:89] found id: ""
	I1217 02:08:33.043650 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.043660 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:33.043666 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:33.043728 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:33.068595 1498704 cri.go:89] found id: ""
	I1217 02:08:33.068617 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.068627 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:33.068633 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:33.068695 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:33.097084 1498704 cri.go:89] found id: ""
	I1217 02:08:33.097108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.097117 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:33.097126 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:33.097137 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:33.122964 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:33.123001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:33.151132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:33.151159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:33.206768 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:33.206805 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:33.221251 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:33.221330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:33.289516 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:35.789806 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.800262 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:35.800330 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:35.824823 1498704 cri.go:89] found id: ""
	I1217 02:08:35.824844 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.824852 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:35.824859 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:35.824916 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:35.849352 1498704 cri.go:89] found id: ""
	I1217 02:08:35.849379 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.849388 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:35.849395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:35.849455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:35.873025 1498704 cri.go:89] found id: ""
	I1217 02:08:35.873045 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.873054 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:35.873060 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:35.873123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:35.897548 1498704 cri.go:89] found id: ""
	I1217 02:08:35.897572 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.897581 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:35.897586 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:35.897660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:35.927220 1498704 cri.go:89] found id: ""
	I1217 02:08:35.927283 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.927301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:35.927309 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:35.927374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:35.955050 1498704 cri.go:89] found id: ""
	I1217 02:08:35.955075 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.955083 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:35.955089 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:35.955168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:35.979074 1498704 cri.go:89] found id: ""
	I1217 02:08:35.979144 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.979160 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:35.979167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:35.979228 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:36.005502 1498704 cri.go:89] found id: ""
	I1217 02:08:36.005529 1498704 logs.go:282] 0 containers: []
	W1217 02:08:36.005557 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:36.005568 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:36.005582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:36.022508 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:36.022536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:36.088117 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:36.088139 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:36.088152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:36.112883 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:36.112917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:36.142584 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:36.142610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.698261 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:38.709807 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:38.709880 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:38.734678 1498704 cri.go:89] found id: ""
	I1217 02:08:38.734703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.734712 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:38.734718 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:38.734777 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:38.764118 1498704 cri.go:89] found id: ""
	I1217 02:08:38.764145 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.764154 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:38.764161 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:38.764223 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:38.792269 1498704 cri.go:89] found id: ""
	I1217 02:08:38.792295 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.792305 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:38.792311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:38.792371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:38.817823 1498704 cri.go:89] found id: ""
	I1217 02:08:38.817845 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.817854 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:38.817861 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:38.817921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:38.846444 1498704 cri.go:89] found id: ""
	I1217 02:08:38.846469 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.846478 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:38.846484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:38.846575 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:38.870805 1498704 cri.go:89] found id: ""
	I1217 02:08:38.870830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.870839 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:38.870845 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:38.870909 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:38.902022 1498704 cri.go:89] found id: ""
	I1217 02:08:38.902047 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.902056 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:38.902063 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:38.902127 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:38.925802 1498704 cri.go:89] found id: ""
	I1217 02:08:38.925831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.925851 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:38.925860 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:38.925871 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.991113 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:38.991154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:39.006019 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:39.006049 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:39.074269 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:39.074328 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:39.074342 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:39.099793 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:39.099827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
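The log-gathering half of each cycle is likewise a fixed command list. The sketch below repeats that sequence with the journalctl unit names, dmesg flags, and staged kubectl path copied verbatim from the runs above; the v1.35.0-beta.0 binary path is specific to this job's Kubernetes version, and running any of it presumes the same on-node sudo access as before.

    # Log sources collected on every cycle, as run above:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400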
	I1217 02:08:41.629026 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.643330 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:41.643411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:41.702722 1498704 cri.go:89] found id: ""
	I1217 02:08:41.702743 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.702752 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:41.702758 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:41.702817 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:41.727343 1498704 cri.go:89] found id: ""
	I1217 02:08:41.727368 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.727377 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:41.727383 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:41.727443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:41.752306 1498704 cri.go:89] found id: ""
	I1217 02:08:41.752331 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.752340 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:41.752346 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:41.752409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:41.777003 1498704 cri.go:89] found id: ""
	I1217 02:08:41.777078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.777101 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:41.777121 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:41.777225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:41.801272 1498704 cri.go:89] found id: ""
	I1217 02:08:41.801298 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.801306 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:41.801313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:41.801371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:41.827046 1498704 cri.go:89] found id: ""
	I1217 02:08:41.827070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.827078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:41.827085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:41.827142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:41.855924 1498704 cri.go:89] found id: ""
	I1217 02:08:41.855956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.855965 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:41.855972 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:41.856042 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:41.882797 1498704 cri.go:89] found id: ""
	I1217 02:08:41.882821 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.882830 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:41.882840 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:41.882856 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:41.897281 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:41.897316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:41.963310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:41.963333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:41.963344 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:41.988494 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:41.988529 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:42.019738 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:42.019770 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
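Each retry round above issues the same per-component query through crictl; an empty ID list is what yields the 'found id: ""' and "0 containers" lines. A minimal stand-alone sketch of that probe (assuming crictl and sudo are available on the node; the component names are just examples) is:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // probe lists container IDs whose name matches the given component,
    // mirroring the "crictl ps -a --quiet --name=<X>" calls in the log.
    func probe(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := probe(c)
            if err != nil {
                fmt.Printf("probe %s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

An empty slice here corresponds exactly to the 'No container was found matching' warnings above.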
	I1217 02:08:44.578521 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.589302 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:44.589376 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:44.614651 1498704 cri.go:89] found id: ""
	I1217 02:08:44.614676 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.614685 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:44.614692 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:44.614755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:44.666392 1498704 cri.go:89] found id: ""
	I1217 02:08:44.666414 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.666422 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:44.666429 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:44.666487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:44.722566 1498704 cri.go:89] found id: ""
	I1217 02:08:44.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.722599 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:44.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:44.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:44.747631 1498704 cri.go:89] found id: ""
	I1217 02:08:44.747656 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.747665 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:44.747671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:44.747730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:44.775719 1498704 cri.go:89] found id: ""
	I1217 02:08:44.775756 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.775765 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:44.775773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:44.775846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:44.801032 1498704 cri.go:89] found id: ""
	I1217 02:08:44.801056 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.801066 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:44.801072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:44.801131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:44.827838 1498704 cri.go:89] found id: ""
	I1217 02:08:44.827872 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.827883 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:44.827890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:44.827961 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:44.852948 1498704 cri.go:89] found id: ""
	I1217 02:08:44.852981 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.852990 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:44.853000 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:44.853011 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:44.908280 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:44.908314 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:44.923445 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:44.923538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:44.992600 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:44.992624 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:44.992637 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:45.027924 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:45.027975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
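Every "failed describe nodes" block in this run reduces to the same root error, "dial tcp [::1]:8443: connect: connection refused": nothing is accepting connections on the apiserver port, so kubectl cannot even fetch the API group list. The failure can be reproduced without kubectl by a plain TCP dial; a minimal sketch (the 2-second timeout is an assumption, not a minikube setting):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubectl errors above boil down to this dial failing.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }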
	I1217 02:08:47.587759 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.598591 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:47.598664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:47.660378 1498704 cri.go:89] found id: ""
	I1217 02:08:47.660400 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.660408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:47.660414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:47.660472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:47.708467 1498704 cri.go:89] found id: ""
	I1217 02:08:47.708489 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.708498 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:47.708504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:47.708563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:47.733161 1498704 cri.go:89] found id: ""
	I1217 02:08:47.733183 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.733191 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:47.733198 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:47.733264 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:47.759190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.759213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.759222 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:47.759228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:47.759285 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:47.787579 1498704 cri.go:89] found id: ""
	I1217 02:08:47.787601 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.787610 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:47.787616 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:47.787697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:47.816190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.816215 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.816224 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:47.816231 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:47.816323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:47.843534 1498704 cri.go:89] found id: ""
	I1217 02:08:47.843562 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.843572 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:47.843578 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:47.843643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:47.867806 1498704 cri.go:89] found id: ""
	I1217 02:08:47.867831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.867841 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:47.867852 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:47.867870 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:47.926619 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:47.926658 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:47.941706 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:47.941734 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:48.009461 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:48.009539 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:48.009561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:48.035273 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:48.035311 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
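When no control-plane containers are found, the gatherer falls back to host-level sources: the kubelet and containerd units via journalctl, kernel messages via a filtered dmesg, and a raw container listing. A sketch that collects the same journald and dmesg sources locally (commands copied verbatim from the log; running them requires sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through bash, the same way the
    // ssh_runner lines above invoke them on the node.
    func gather(label, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }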
	I1217 02:08:50.567421 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.578623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:50.578694 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:50.607374 1498704 cri.go:89] found id: ""
	I1217 02:08:50.607396 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.607405 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.607411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:50.607472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:50.666455 1498704 cri.go:89] found id: ""
	I1217 02:08:50.666484 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.666493 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.666499 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:50.666559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:50.717784 1498704 cri.go:89] found id: ""
	I1217 02:08:50.717822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.717831 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.717838 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:50.717941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:50.748500 1498704 cri.go:89] found id: ""
	I1217 02:08:50.748531 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.748543 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.748550 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:50.748618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:50.774642 1498704 cri.go:89] found id: ""
	I1217 02:08:50.774668 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.774677 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.774683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:50.774742 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:50.803738 1498704 cri.go:89] found id: ""
	I1217 02:08:50.803760 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.803769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.803776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:50.803840 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:50.828145 1498704 cri.go:89] found id: ""
	I1217 02:08:50.828212 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.828238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.828256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:50.828335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:50.853950 1498704 cri.go:89] found id: ""
	I1217 02:08:50.853976 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.853985 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.853995 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.854006 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.910278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.910316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.924980 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.925008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.992234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.992257 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:50.992271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:51.018744 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:51.018778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
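Each cycle opens with a process-level check before any CRI queries: pgrep with -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match. pgrep exits 1 when nothing matches, which is what every round in this run observed. A sketch of the same probe (assuming sudo and a procps pgrep):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // Exit status 1 means no matching process, as seen above.
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Printf("kube-apiserver pid: %s", out)
    }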
	I1217 02:08:53.547953 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.558518 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:53.558593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:53.583100 1498704 cri.go:89] found id: ""
	I1217 02:08:53.583125 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.583134 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.583141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:53.583202 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:53.607925 1498704 cri.go:89] found id: ""
	I1217 02:08:53.607948 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.607956 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.607962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:53.608023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:53.657081 1498704 cri.go:89] found id: ""
	I1217 02:08:53.657104 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.657127 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.657135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:53.657208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:53.704278 1498704 cri.go:89] found id: ""
	I1217 02:08:53.704305 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.704313 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.704321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:53.704381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:53.730823 1498704 cri.go:89] found id: ""
	I1217 02:08:53.730851 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.730860 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.730868 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:53.730928 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:53.757094 1498704 cri.go:89] found id: ""
	I1217 02:08:53.757116 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.757125 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.757132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:53.757192 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:53.786671 1498704 cri.go:89] found id: ""
	I1217 02:08:53.786696 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.786705 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.786711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:53.786768 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:53.810935 1498704 cri.go:89] found id: ""
	I1217 02:08:53.810957 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.810966 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.810975 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.810986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.866107 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.866140 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:53.881003 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.881037 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.945396 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.945419 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:53.945432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:53.973428 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.973469 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
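The timestamps show one full probe round roughly every three seconds (02:08:41, :44, :47, :50, :53, :56, :59, ...). A hedged sketch of such a wait loop follows; the 3-second interval is read off the log, while the overall deadline is illustrative, not minikube's actual value:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverUp reports whether a kube-apiserver process is visible,
    // reusing the pgrep probe shown in the log.
    func apiserverUp() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // illustrative deadline
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("apiserver process appeared")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }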
	I1217 02:08:56.504673 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.515738 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:56.515816 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:56.540741 1498704 cri.go:89] found id: ""
	I1217 02:08:56.540765 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.540773 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.540780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:56.540846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:56.565810 1498704 cri.go:89] found id: ""
	I1217 02:08:56.565831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.565840 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.565846 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:56.565907 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:56.596074 1498704 cri.go:89] found id: ""
	I1217 02:08:56.596096 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.596105 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.596112 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:56.596173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:56.636207 1498704 cri.go:89] found id: ""
	I1217 02:08:56.636229 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.636238 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.636244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:56.636304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:56.698720 1498704 cri.go:89] found id: ""
	I1217 02:08:56.698749 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.698758 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.698765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:56.698838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:56.732897 1498704 cri.go:89] found id: ""
	I1217 02:08:56.732918 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.732926 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.732933 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:56.732999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:56.762677 1498704 cri.go:89] found id: ""
	I1217 02:08:56.762703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.762712 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.762719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:56.762779 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:56.788307 1498704 cri.go:89] found id: ""
	I1217 02:08:56.788333 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.788342 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.788352 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.788364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.844513 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.844548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.858936 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.858968 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.925270 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.925293 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:56.925305 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:56.951928 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.951967 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
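The "container status" command above carries a two-level fallback: "which crictl || echo crictl" keeps the pipeline intact even when which finds nothing (the bare name still gets attempted via sudo's PATH), and the trailing "|| sudo docker ps -a" switches to Docker if the crictl invocation fails outright. The same preference order expressed natively in Go (a sketch, assuming sudo access):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; fall back to docker on failure, mirroring
        // "sudo crictl ps -a || sudo docker ps -a" from the log.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Printf("%s", out)
    }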
	I1217 02:08:59.483487 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.494825 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:59.494899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:59.520751 1498704 cri.go:89] found id: ""
	I1217 02:08:59.520777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.520785 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.520792 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:59.520851 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:59.546097 1498704 cri.go:89] found id: ""
	I1217 02:08:59.546122 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.546131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.546138 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:59.546205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:59.571525 1498704 cri.go:89] found id: ""
	I1217 02:08:59.571548 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.571556 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.571562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:59.571635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:59.595916 1498704 cri.go:89] found id: ""
	I1217 02:08:59.595944 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.595952 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.595959 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:59.596021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:59.677470 1498704 cri.go:89] found id: ""
	I1217 02:08:59.677497 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.677506 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.677512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:59.677577 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:59.708285 1498704 cri.go:89] found id: ""
	I1217 02:08:59.708311 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.708320 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.708328 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:59.708388 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:59.735444 1498704 cri.go:89] found id: ""
	I1217 02:08:59.735466 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.735474 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.735481 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:59.735551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:59.758934 1498704 cri.go:89] found id: ""
	I1217 02:08:59.758956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.758964 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.758974 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.758985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.786487 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.786513 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.843688 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.843719 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.858632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.858661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.922844 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:59.922867 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:59.922888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.448942 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.459473 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:02.459570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:02.487463 1498704 cri.go:89] found id: ""
	I1217 02:09:02.487486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.487494 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.487529 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:02.487591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:02.516013 1498704 cri.go:89] found id: ""
	I1217 02:09:02.516038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.516047 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.516053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:02.516118 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:02.541783 1498704 cri.go:89] found id: ""
	I1217 02:09:02.541806 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.541814 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.541820 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:02.541876 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:02.566427 1498704 cri.go:89] found id: ""
	I1217 02:09:02.566450 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.566459 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.566465 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:02.566561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:02.590894 1498704 cri.go:89] found id: ""
	I1217 02:09:02.590917 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.590926 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.590932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:02.590998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:02.614645 1498704 cri.go:89] found id: ""
	I1217 02:09:02.614668 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.614677 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.614683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:02.614747 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:02.656626 1498704 cri.go:89] found id: ""
	I1217 02:09:02.656662 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.656671 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.656681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:02.656751 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:02.702753 1498704 cri.go:89] found id: ""
	I1217 02:09:02.702787 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.702796 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.702806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.702817 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.772243 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.772266 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:02.772278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.797608 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.797893 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:02.829032 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.829057 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.886939 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.886975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.401718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.412408 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:05.412488 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:05.441786 1498704 cri.go:89] found id: ""
	I1217 02:09:05.441821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.441830 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.441837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:05.441908 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:05.466385 1498704 cri.go:89] found id: ""
	I1217 02:09:05.466408 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.466416 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.466422 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:05.466481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:05.491033 1498704 cri.go:89] found id: ""
	I1217 02:09:05.491057 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.491066 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.491072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:05.491131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:05.515650 1498704 cri.go:89] found id: ""
	I1217 02:09:05.515675 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.515684 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.515691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:05.515753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:05.539973 1498704 cri.go:89] found id: ""
	I1217 02:09:05.539996 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.540004 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.540016 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:05.540077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:05.565317 1498704 cri.go:89] found id: ""
	I1217 02:09:05.565338 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.565347 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.565353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:05.565414 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:05.590136 1498704 cri.go:89] found id: ""
	I1217 02:09:05.590161 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.590169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.590176 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:05.590240 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:05.614696 1498704 cri.go:89] found id: ""
	I1217 02:09:05.614733 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.614742 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.614752 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.614762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.682980 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.683022 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.700674 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.700704 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.777617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:05.777635 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:05.777670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:05.803121 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.803155 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
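
Between retries, minikube enumerates every expected control-plane container by name, as the repeated `sudo crictl ps -a --quiet --name=...` lines above show; each probe comes back empty (`found id: ""`), meaning the runtime never created the pods. A minimal bash sketch of the same check, assuming SSH access to the node and `crictl` on the PATH (the name list mirrors the log and is illustrative, not minikube's authoritative set):

  #!/usr/bin/env bash
  # Probe the CRI runtime for each expected control-plane container,
  # mirroring the repeated "crictl ps -a --quiet --name=..." calls above.
  set -euo pipefail
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="$name" || true)
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""   # matches the W lines in the log
    else
      echo "$name -> $ids"
    fi
  done
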
	I1217 02:09:08.332434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.343036 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:08.343108 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:08.367411 1498704 cri.go:89] found id: ""
	I1217 02:09:08.367434 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.367443 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.367449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:08.367517 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:08.391668 1498704 cri.go:89] found id: ""
	I1217 02:09:08.391695 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.391704 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.391712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:08.391775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:08.415929 1498704 cri.go:89] found id: ""
	I1217 02:09:08.415953 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.415961 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.415968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:08.416050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:08.441685 1498704 cri.go:89] found id: ""
	I1217 02:09:08.441755 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.441779 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.441798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:08.441888 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:08.466687 1498704 cri.go:89] found id: ""
	I1217 02:09:08.466713 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.466722 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.466728 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:08.466808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:08.491044 1498704 cri.go:89] found id: ""
	I1217 02:09:08.491069 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.491078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.491085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:08.491190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:08.517483 1498704 cri.go:89] found id: ""
	I1217 02:09:08.517508 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.517517 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.517524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:08.517593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:08.543991 1498704 cri.go:89] found id: ""
	I1217 02:09:08.544017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.544026 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.544035 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.544053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.608510 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.608567 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.642989 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.643026 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.751212 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:08.751241 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:08.751254 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:08.779142 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.779180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
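
The roughly three-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines is minikube waiting for the apiserver process to appear before re-gathering logs. A hand-rolled equivalent with an explicit deadline (sketch only; the 3 s interval matches the log, while the 120 s budget is an assumed value, not minikube's configured timeout):

  #!/usr/bin/env bash
  # Wait for a kube-apiserver process to appear, polling like the log does.
  deadline=$((SECONDS + 120))
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
    if (( SECONDS >= deadline )); then
      echo "timed out: kube-apiserver never started" >&2
      exit 1
    fi
    sleep 3
  done
  echo "kube-apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"
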
	I1217 02:09:11.312760 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.327627 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:11.327714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:11.352557 1498704 cri.go:89] found id: ""
	I1217 02:09:11.352580 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.352588 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.352595 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:11.352654 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:11.378891 1498704 cri.go:89] found id: ""
	I1217 02:09:11.378913 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.378922 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.378928 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:11.378987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:11.403393 1498704 cri.go:89] found id: ""
	I1217 02:09:11.403416 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.403424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.403430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:11.403489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:11.432435 1498704 cri.go:89] found id: ""
	I1217 02:09:11.432459 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.432472 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.432479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:11.432565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:11.458410 1498704 cri.go:89] found id: ""
	I1217 02:09:11.458436 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.458445 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.458451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:11.458510 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:11.484113 1498704 cri.go:89] found id: ""
	I1217 02:09:11.484140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.484149 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.484156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:11.484216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:11.511088 1498704 cri.go:89] found id: ""
	I1217 02:09:11.511112 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.511121 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.511128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:11.511191 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:11.540295 1498704 cri.go:89] found id: ""
	I1217 02:09:11.540324 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.540333 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.540342 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.540354 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.554828 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.554857 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:11.615811 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:11.615835 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:11.615849 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:11.643999 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.644035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.696705 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.696733 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.265939 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.276062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:14.276129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:14.301710 1498704 cri.go:89] found id: ""
	I1217 02:09:14.301736 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.301744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.301753 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:14.301811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:14.327085 1498704 cri.go:89] found id: ""
	I1217 02:09:14.327111 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.327119 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.327125 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:14.327182 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:14.351112 1498704 cri.go:89] found id: ""
	I1217 02:09:14.351134 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.351142 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.351148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:14.351208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:14.379796 1498704 cri.go:89] found id: ""
	I1217 02:09:14.379823 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.379833 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.379840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:14.379902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:14.404135 1498704 cri.go:89] found id: ""
	I1217 02:09:14.404158 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.404167 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.404172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:14.404234 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:14.428171 1498704 cri.go:89] found id: ""
	I1217 02:09:14.428194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.428204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.428212 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:14.428272 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:14.455193 1498704 cri.go:89] found id: ""
	I1217 02:09:14.455217 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.455225 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.455232 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:14.455292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:14.479959 1498704 cri.go:89] found id: ""
	I1217 02:09:14.479985 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.479994 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.480003 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.480014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.537013 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.537048 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:14.551864 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:14.551888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:14.616449 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:14.616522 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:14.616551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:14.646206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:14.646248 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
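
Each "Gathering logs for ..." step is a single shell command on the node, and the same bundle can be captured by hand for offline triage. A condensed sketch using the commands visible in the log (the output file names are illustrative):

  #!/usr/bin/env bash
  # Capture the same diagnostics minikube collects between retries.
  sudo journalctl -u kubelet -n 400     > kubelet.log
  sudo journalctl -u containerd -n 400  > containerd.log
  sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
  # describe nodes keeps failing until the apiserver is reachable
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt 2>&1 || true
  sudo crictl ps -a > containers.txt
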
	I1217 02:09:17.269774 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.280406 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:17.280478 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:17.305501 1498704 cri.go:89] found id: ""
	I1217 02:09:17.305529 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.305537 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.305544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:17.305601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:17.330336 1498704 cri.go:89] found id: ""
	I1217 02:09:17.330361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.330370 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.330377 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:17.330436 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:17.355210 1498704 cri.go:89] found id: ""
	I1217 02:09:17.355235 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.355250 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.355256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:17.355315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:17.380868 1498704 cri.go:89] found id: ""
	I1217 02:09:17.380893 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.380901 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.380908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:17.380968 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:17.406748 1498704 cri.go:89] found id: ""
	I1217 02:09:17.406771 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.406779 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.406785 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:17.406844 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:17.431237 1498704 cri.go:89] found id: ""
	I1217 02:09:17.431263 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.431272 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.431279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:17.431337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:17.455474 1498704 cri.go:89] found id: ""
	I1217 02:09:17.455500 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.455516 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.455523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:17.455586 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:17.479040 1498704 cri.go:89] found id: ""
	I1217 02:09:17.479062 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.479070 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.479079 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:17.479092 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.511305 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.511333 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:17.567635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:17.567672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:17.583863 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:17.583892 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:17.655165 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:17.655185 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:17.655198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.181833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.192614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:20.192732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:20.219176 1498704 cri.go:89] found id: ""
	I1217 02:09:20.219199 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.219208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.219215 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:20.219275 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:20.248198 1498704 cri.go:89] found id: ""
	I1217 02:09:20.248224 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.248233 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.248239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:20.248299 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:20.273332 1498704 cri.go:89] found id: ""
	I1217 02:09:20.273355 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.273363 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.273370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:20.273429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:20.299548 1498704 cri.go:89] found id: ""
	I1217 02:09:20.299621 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.299655 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.299668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:20.299741 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:20.328882 1498704 cri.go:89] found id: ""
	I1217 02:09:20.328911 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.328919 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.328925 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:20.328987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:20.354861 1498704 cri.go:89] found id: ""
	I1217 02:09:20.354887 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.354898 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.354904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:20.354999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:20.380708 1498704 cri.go:89] found id: ""
	I1217 02:09:20.380744 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.380754 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:20.380761 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:20.380833 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:20.410724 1498704 cri.go:89] found id: ""
	I1217 02:09:20.410749 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.410758 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:20.410767 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:20.410778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:20.470014 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:20.470053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:20.484955 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:20.484989 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:20.548617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:20.548637 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:20.548649 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.573994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:20.574030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:23.106211 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.116663 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:23.116732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:23.144995 1498704 cri.go:89] found id: ""
	I1217 02:09:23.145017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.145025 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.145031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:23.145089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:23.172623 1498704 cri.go:89] found id: ""
	I1217 02:09:23.172651 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.172660 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.172668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:23.172727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:23.201388 1498704 cri.go:89] found id: ""
	I1217 02:09:23.201415 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.201424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.201437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:23.201500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:23.225335 1498704 cri.go:89] found id: ""
	I1217 02:09:23.225361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.225370 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.225376 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:23.225433 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:23.251629 1498704 cri.go:89] found id: ""
	I1217 02:09:23.251654 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.251662 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:23.251668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:23.251733 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:23.279092 1498704 cri.go:89] found id: ""
	I1217 02:09:23.279120 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.279129 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:23.279136 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:23.279199 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:23.303104 1498704 cri.go:89] found id: ""
	I1217 02:09:23.303126 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.303134 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:23.303140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:23.303204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:23.327448 1498704 cri.go:89] found id: ""
	I1217 02:09:23.327479 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.327488 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:23.327497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:23.327544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:23.394139 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:23.394186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:23.409933 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:23.409961 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:23.478459 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:23.478484 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:23.478498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:23.503474 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:23.503515 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
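
Every `describe nodes` attempt dies the same way: `dial tcp [::1]:8443: connect: connection refused` means nothing is listening on the apiserver port at all, so the failure happens at TCP connect, before TLS or authentication are even attempted. A one-line reachability check against the standard apiserver health endpoint (assuming the default RBAC that leaves /readyz and /livez readable without credentials):

  # Prints "ok" once the apiserver serves traffic; connection refused here
  # reproduces exactly the kubectl errors in the log above.
  curl -sk https://localhost:8443/readyz || echo "refused: apiserver not listening on 8443"
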
	I1217 02:09:26.036615 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.047567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:26.047682 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:26.072876 1498704 cri.go:89] found id: ""
	I1217 02:09:26.072903 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.072912 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.072919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:26.072981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:26.100352 1498704 cri.go:89] found id: ""
	I1217 02:09:26.100378 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.100387 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:26.100392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:26.100450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:26.135848 1498704 cri.go:89] found id: ""
	I1217 02:09:26.135875 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.135884 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:26.135890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:26.135950 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:26.168993 1498704 cri.go:89] found id: ""
	I1217 02:09:26.169020 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.169028 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:26.169035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:26.169094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:26.210553 1498704 cri.go:89] found id: ""
	I1217 02:09:26.210581 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.210590 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:26.210597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:26.210659 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:26.236497 1498704 cri.go:89] found id: ""
	I1217 02:09:26.236526 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.236534 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:26.236541 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:26.236600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:26.261964 1498704 cri.go:89] found id: ""
	I1217 02:09:26.261989 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.261997 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:26.262004 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:26.262090 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:26.288105 1498704 cri.go:89] found id: ""
	I1217 02:09:26.288138 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.288148 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:26.288157 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:26.288168 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:26.343617 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:26.343650 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:26.358285 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:26.358312 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:26.424304 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:26.424327 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:26.424340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:26.450148 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:26.450185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:28.978571 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:28.990745 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:28.990835 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:29.015938 1498704 cri.go:89] found id: ""
	I1217 02:09:29.015962 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.015971 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:29.015977 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:29.016035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:29.041116 1498704 cri.go:89] found id: ""
	I1217 02:09:29.041141 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.041149 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:29.041156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:29.041217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:29.066014 1498704 cri.go:89] found id: ""
	I1217 02:09:29.066036 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.066044 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:29.066051 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:29.066107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:29.090514 1498704 cri.go:89] found id: ""
	I1217 02:09:29.090539 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.090548 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:29.090554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:29.090640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:29.114384 1498704 cri.go:89] found id: ""
	I1217 02:09:29.114405 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.114414 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:29.114420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:29.114506 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:29.143954 1498704 cri.go:89] found id: ""
	I1217 02:09:29.143977 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.143987 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:29.143995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:29.144081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:29.185816 1498704 cri.go:89] found id: ""
	I1217 02:09:29.185839 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.185847 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:29.185864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:29.185941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:29.214738 1498704 cri.go:89] found id: ""
	I1217 02:09:29.214761 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.214770 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:29.214780 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:29.214807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:29.244598 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:29.244623 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:29.300237 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:29.300271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.314809 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:29.314874 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:29.380612 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:29.380633 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:29.380645 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	[... identical log-gathering cycles from 02:09:31 through 02:09:47 omitted: each repeats the probes above at ~3-second intervals, finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers, and fails "describe nodes" with connection refused on localhost:8443 ...]
	I1217 02:09:49.524226 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:49.535609 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:49.535691 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:49.563709 1498704 cri.go:89] found id: ""
	I1217 02:09:49.563735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.563744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:49.563751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:49.563829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:49.589205 1498704 cri.go:89] found id: ""
	I1217 02:09:49.589229 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.589238 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:49.589245 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:49.589305 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:49.615016 1498704 cri.go:89] found id: ""
	I1217 02:09:49.615038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.615046 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:49.615053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:49.615110 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:49.639299 1498704 cri.go:89] found id: ""
	I1217 02:09:49.639377 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.639407 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:49.639416 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:49.639514 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:49.664056 1498704 cri.go:89] found id: ""
	I1217 02:09:49.664079 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.664087 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:49.664093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:49.664151 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:49.688630 1498704 cri.go:89] found id: ""
	I1217 02:09:49.688652 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.688661 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:49.688667 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:49.688724 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:49.712428 1498704 cri.go:89] found id: ""
	I1217 02:09:49.712447 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.712461 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:49.712467 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:49.712525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:49.736311 1498704 cri.go:89] found id: ""
	I1217 02:09:49.736388 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.736412 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
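Each probe above is the same crictl call with a different --name filter: --quiet prints only container IDs, and -a includes exited containers, so an empty result means nothing matching the component has ever been created, not merely that it is stopped. The whole block collapses to one loop (component list copied from the log; crictl on the node's PATH is assumed):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -n "$ids" ] || echo "no container found matching $c"
    done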
	I1217 02:09:49.736433 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:49.736473 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:49.792224 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:49.792264 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.806602 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:49.806639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.873760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
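The describe-nodes failure above is a connectivity problem rather than a kubectl problem: the kubeconfig at /var/lib/minikube/kubeconfig points at https://localhost:8443, and with no kube-apiserver container present (every crictl probe above came back empty) nothing is listening on that port, so each API-group discovery request is refused. Two quick checks from inside the node separate the two cases; curl and ss being available in the node image is an assumption:

    # does anything answer on the apiserver port?
    curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable"
    # is any process bound to 8443 at all?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"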
	I1217 02:09:49.873781 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:49.873793 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:49.901849 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:49.901881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.452856 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:52.463628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:52.463707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:52.487769 1498704 cri.go:89] found id: ""
	I1217 02:09:52.487794 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.487802 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:52.487809 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:52.487901 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:52.515989 1498704 cri.go:89] found id: ""
	I1217 02:09:52.516013 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.516022 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:52.516028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:52.516136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:52.542514 1498704 cri.go:89] found id: ""
	I1217 02:09:52.542538 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.542547 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:52.542554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:52.542622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:52.567016 1498704 cri.go:89] found id: ""
	I1217 02:09:52.567050 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.567059 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:52.567067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:52.567129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:52.591935 1498704 cri.go:89] found id: ""
	I1217 02:09:52.591961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.591969 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:52.591975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:52.592035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:52.617548 1498704 cri.go:89] found id: ""
	I1217 02:09:52.617573 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.617583 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:52.617589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:52.617668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:52.642857 1498704 cri.go:89] found id: ""
	I1217 02:09:52.642881 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.642889 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:52.642895 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:52.642952 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:52.666997 1498704 cri.go:89] found id: ""
	I1217 02:09:52.667022 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.667031 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:52.667042 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:52.667055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.736175 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:52.736198 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:52.736210 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:52.761310 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.761340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.789730 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:52.789758 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:52.846428 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:52.846464 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
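The container-status command above is a two-stage fallback: "which crictl || echo crictl" substitutes the resolved crictl path when one exists (and the bare name otherwise), and if that crictl invocation fails for any reason, "|| sudo docker ps -a" retries with Docker. Expanded into an equivalent step-by-step form for readability:

    CRICTL=$(which crictl || echo crictl)      # full path if installed, bare name if not
    sudo "$CRICTL" ps -a || sudo docker ps -a  # on any crictl failure, fall back to Docker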
	I1217 02:09:55.363216 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:55.378169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:55.378242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:55.405237 1498704 cri.go:89] found id: ""
	I1217 02:09:55.405262 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.405271 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:55.405277 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:55.405341 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:55.431829 1498704 cri.go:89] found id: ""
	I1217 02:09:55.431852 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.431860 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:55.431866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:55.431924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:55.464126 1498704 cri.go:89] found id: ""
	I1217 02:09:55.464149 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.464157 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:55.464163 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:55.464221 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:55.489098 1498704 cri.go:89] found id: ""
	I1217 02:09:55.489140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.489174 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:55.489188 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:55.489291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:55.514718 1498704 cri.go:89] found id: ""
	I1217 02:09:55.514753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.514762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:55.514768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:55.514828 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:55.538941 1498704 cri.go:89] found id: ""
	I1217 02:09:55.538964 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.538972 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:55.538979 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:55.539040 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:55.564206 1498704 cri.go:89] found id: ""
	I1217 02:09:55.564233 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.564242 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:55.564248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:55.564307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:55.588698 1498704 cri.go:89] found id: ""
	I1217 02:09:55.588722 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.588731 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:55.588740 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:55.588751 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.643314 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.643346 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.657901 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.657933 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.728753 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.720443   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.721112   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722240   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722829   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.724553   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:55.728775 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:55.728788 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:55.754781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.754822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:58.282279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:58.292524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:58.292594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:58.320120 1498704 cri.go:89] found id: ""
	I1217 02:09:58.320144 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.320153 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:58.320160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:58.320219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:58.344609 1498704 cri.go:89] found id: ""
	I1217 02:09:58.344634 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.344643 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:58.344649 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:58.344714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:58.371166 1498704 cri.go:89] found id: ""
	I1217 02:09:58.371194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.371203 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:58.371209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:58.371267 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:58.399919 1498704 cri.go:89] found id: ""
	I1217 02:09:58.399947 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.399955 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:58.399961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:58.400029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:58.426746 1498704 cri.go:89] found id: ""
	I1217 02:09:58.426774 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.426783 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:58.426789 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:58.426849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:58.452086 1498704 cri.go:89] found id: ""
	I1217 02:09:58.452164 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.452187 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:58.452202 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:58.452313 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:58.479597 1498704 cri.go:89] found id: ""
	I1217 02:09:58.479640 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.479650 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:58.479657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:58.479735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:58.507631 1498704 cri.go:89] found id: ""
	I1217 02:09:58.507660 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.507668 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.507677 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.507688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.563330 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.563364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.577956 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.577986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.640599 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.632937   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.633485   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.634953   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.635364   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.636788   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
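Note that the probe runs the version-matched kubectl shipped inside the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) with an explicit --kubeconfig, not whatever kubectl the host happens to have. The same check can be reproduced from the host; <profile> below is a hypothetical placeholder for the cluster profile under test:

    minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"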
	I1217 02:09:58.640618 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:58.640631 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:58.665542 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.665579 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.193230 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:01.205093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:01.205168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:01.231574 1498704 cri.go:89] found id: ""
	I1217 02:10:01.231657 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.231671 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:01.231679 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:01.231755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:01.258626 1498704 cri.go:89] found id: ""
	I1217 02:10:01.258656 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.258665 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:01.258671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:01.258731 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:01.285028 1498704 cri.go:89] found id: ""
	I1217 02:10:01.285107 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.285130 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:01.285150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:01.285236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:01.311238 1498704 cri.go:89] found id: ""
	I1217 02:10:01.311260 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.311270 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:01.311276 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:01.311337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:01.335915 1498704 cri.go:89] found id: ""
	I1217 02:10:01.335938 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.335946 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.335953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:01.336013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:01.362270 1498704 cri.go:89] found id: ""
	I1217 02:10:01.362299 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.362310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.362317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:01.362386 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:01.389194 1498704 cri.go:89] found id: ""
	I1217 02:10:01.389272 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.389296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.389315 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:01.389404 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:01.425060 1498704 cri.go:89] found id: ""
	I1217 02:10:01.425133 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.425156 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.425178 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.425214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.484970 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.485005 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.500061 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.500089 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.568584 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.560770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.561180   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.562770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.563222   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.564705   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:01.568606 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:01.568618 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:01.594966 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.595000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.124707 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:04.138794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:04.138889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:04.192615 1498704 cri.go:89] found id: ""
	I1217 02:10:04.192646 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.192657 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:04.192664 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:04.192738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:04.223099 1498704 cri.go:89] found id: ""
	I1217 02:10:04.223126 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.223135 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.223142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:04.223204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:04.251428 1498704 cri.go:89] found id: ""
	I1217 02:10:04.251451 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.251460 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.251466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:04.251549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:04.277739 1498704 cri.go:89] found id: ""
	I1217 02:10:04.277767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.277778 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.277786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:04.277849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:04.302600 1498704 cri.go:89] found id: ""
	I1217 02:10:04.302625 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.302633 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.302639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:04.302702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:04.328192 1498704 cri.go:89] found id: ""
	I1217 02:10:04.328221 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.328230 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.328237 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:04.328307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:04.354026 1498704 cri.go:89] found id: ""
	I1217 02:10:04.354049 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.354058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.354064 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:04.354125 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:04.387067 1498704 cri.go:89] found id: ""
	I1217 02:10:04.387101 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.387111 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.387140 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:04.387159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:04.420944 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.420981 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.453477 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.453511 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.509779 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.509814 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.525121 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.525151 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.596992 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.588312   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.589011   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590255   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590954   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.592734   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
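The timestamps across these blocks (02:09:46, :49, :52, :55, :58, 02:10:01, :04, ...) show the health check re-probing on a fixed interval of roughly three seconds, each round re-running the pgrep check and the full log-gathering pass. The waiting half of that loop reduces to the sketch below; the pgrep pattern is copied from the log and the interval is approximated from the timestamps:

    # block until a kube-apiserver process for this profile shows up
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done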
	I1217 02:10:07.097279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.107872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:07.107951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:07.140845 1498704 cri.go:89] found id: ""
	I1217 02:10:07.140873 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.140883 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.140889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:07.140949 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:07.171271 1498704 cri.go:89] found id: ""
	I1217 02:10:07.171293 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.171301 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.171307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:07.171368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:07.199048 1498704 cri.go:89] found id: ""
	I1217 02:10:07.199075 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.199085 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.199092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:07.199152 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:07.223715 1498704 cri.go:89] found id: ""
	I1217 02:10:07.223755 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.223765 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.223771 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:07.223838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:07.250683 1498704 cri.go:89] found id: ""
	I1217 02:10:07.250708 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.250718 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.250724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:07.250783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:07.274541 1498704 cri.go:89] found id: ""
	I1217 02:10:07.274614 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.274627 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.274661 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:07.274752 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:07.298768 1498704 cri.go:89] found id: ""
	I1217 02:10:07.298833 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.298859 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.298872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:07.298944 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:07.322447 1498704 cri.go:89] found id: ""
	I1217 02:10:07.322510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.322534 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.322549 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.322561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.392049 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.383394   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.384747   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386434   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386720   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.388152   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.392072 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:07.392086 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:07.419785 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.419819 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.448497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.448525 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.505149 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.505186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.022238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.034403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:10.034482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:10.061856 1498704 cri.go:89] found id: ""
	I1217 02:10:10.061882 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.061891 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.061897 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:10.061976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:10.089092 1498704 cri.go:89] found id: ""
	I1217 02:10:10.089118 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.089128 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.089141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:10.089217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:10.115444 1498704 cri.go:89] found id: ""
	I1217 02:10:10.115467 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.115476 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.115482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:10.115579 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:10.142860 1498704 cri.go:89] found id: ""
	I1217 02:10:10.142889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.142897 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.142904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:10.142975 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:10.171034 1498704 cri.go:89] found id: ""
	I1217 02:10:10.171061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.171070 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.171076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:10.171135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:10.201087 1498704 cri.go:89] found id: ""
	I1217 02:10:10.201121 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.201130 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.201137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:10.201206 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:10.227252 1498704 cri.go:89] found id: ""
	I1217 02:10:10.227316 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.227340 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.227353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:10.227429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:10.256814 1498704 cri.go:89] found id: ""
	I1217 02:10:10.256850 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.256859 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.256885 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.256905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.316432 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.316484 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.331782 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.331807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.418862 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.410069   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.410617   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.412164   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.413026   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.414651   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.418886 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:10.418898 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:10.447108 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.447142 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
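The cycle above repeats throughout this section: before each log-gathering pass, minikube probes for every expected control-plane container by name and logs a warning when none is found. A minimal sketch of that probe loop, assuming a plain os/exec wrapper; the component list is taken from the log, while runCmd and the printed messages are illustrative stand-ins, not minikube's actual cri.go/logs.go API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCmd is a hypothetical helper: run a command and return its stdout.
func runCmd(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).Output()
	return string(out), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// The same query the log shows: all containers whose name matches.
		out, err := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
		ids := strings.Fields(out)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found ids for %s: %v\n", name, ids)
	}
}

Every probe in this report returns an empty ID list, which is why each pass falls through to gathering kubelet, dmesg, containerd, and node logs instead.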
	I1217 02:10:12.978148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.988751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:12.988821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:13.014409 1498704 cri.go:89] found id: ""
	I1217 02:10:13.014435 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.014445 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.014452 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:13.014516 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:13.039697 1498704 cri.go:89] found id: ""
	I1217 02:10:13.039725 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.039734 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.039741 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:13.039830 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:13.063238 1498704 cri.go:89] found id: ""
	I1217 02:10:13.063263 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.063272 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.063279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:13.063337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:13.087932 1498704 cri.go:89] found id: ""
	I1217 02:10:13.087955 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.087964 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.087970 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:13.088029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:13.116779 1498704 cri.go:89] found id: ""
	I1217 02:10:13.116824 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.116833 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.116840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:13.116924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:13.152355 1498704 cri.go:89] found id: ""
	I1217 02:10:13.152379 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.152388 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.152395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:13.152462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:13.178465 1498704 cri.go:89] found id: ""
	I1217 02:10:13.178498 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.178507 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.178513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:13.178597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:13.204065 1498704 cri.go:89] found id: ""
	I1217 02:10:13.204090 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.204099 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.204109 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.204119 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:13.260597 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.260643 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.275806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.275834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.339094 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.330634   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.331065   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.332876   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.333564   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.335042   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:13.339116 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:13.339128 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:13.364711 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.364742 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
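Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver at localhost:8443, and "connect: connection refused" means nothing is listening on the port at all (as opposed to a timeout or a TLS failure). A standalone sketch that reproduces the same condition with a plain TCP dial; this is illustrative, not a command minikube runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound to :8443 this prints "connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}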
	I1217 02:10:15.901294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:15.915207 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:15.915287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:15.944035 1498704 cri.go:89] found id: ""
	I1217 02:10:15.944062 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.944071 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:15.944078 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:15.944142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:15.969105 1498704 cri.go:89] found id: ""
	I1217 02:10:15.969132 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.969142 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:15.969148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:15.969213 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:15.994468 1498704 cri.go:89] found id: ""
	I1217 02:10:15.994495 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.994505 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:15.994511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:15.994576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:16.021869 1498704 cri.go:89] found id: ""
	I1217 02:10:16.021897 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.021907 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.021914 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:16.021981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:16.050208 1498704 cri.go:89] found id: ""
	I1217 02:10:16.050236 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.050245 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.050252 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:16.050319 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:16.076004 1498704 cri.go:89] found id: ""
	I1217 02:10:16.076031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.076041 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.076048 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:16.076159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:16.102446 1498704 cri.go:89] found id: ""
	I1217 02:10:16.102526 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.102550 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.102563 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:16.102643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:16.134280 1498704 cri.go:89] found id: ""
	I1217 02:10:16.134306 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.134315 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.134325 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.134362 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:16.173187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.173220 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.231927 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.231960 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.247063 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.247093 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.315647 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.307649   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.308739   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.309576   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.310605   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.311801   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.315668 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:16.315681 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:18.841379 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:18.852146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:18.852219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:18.877675 1498704 cri.go:89] found id: ""
	I1217 02:10:18.877750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.877765 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:18.877773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:18.877839 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:18.903447 1498704 cri.go:89] found id: ""
	I1217 02:10:18.903482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.903491 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:18.903498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:18.903576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:18.929561 1498704 cri.go:89] found id: ""
	I1217 02:10:18.929588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.929597 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:18.929604 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:18.929683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:18.955239 1498704 cri.go:89] found id: ""
	I1217 02:10:18.955333 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.955350 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:18.955358 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:18.955424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:18.979922 1498704 cri.go:89] found id: ""
	I1217 02:10:18.979953 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.979962 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:18.979968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:18.980035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:19.007041 1498704 cri.go:89] found id: ""
	I1217 02:10:19.007077 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.007087 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.007093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:19.007177 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:19.035426 1498704 cri.go:89] found id: ""
	I1217 02:10:19.035450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.035459 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.035466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:19.035542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:19.060135 1498704 cri.go:89] found id: ""
	I1217 02:10:19.060159 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.060167 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.060200 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.060217 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.116693 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.116728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.134579 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.134610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.216066 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.207558   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.208046   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.209922   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.210470   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.212114   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.216089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:19.216105 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:19.242169 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.242202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
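The timestamps (02:10:10, :13, :16, :19, ...) show the whole probe-and-gather cycle repeating on roughly a three-second cadence. A hypothetical polling loop with that shape, where checkAPIServer stands in for the pgrep probe seen in the log and the interval and deadline are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer mirrors the probe in the log: is a kube-apiserver
// process matching the minikube profile running?
func checkAPIServer() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if checkAPIServer() {
			fmt.Println("apiserver process found")
			return
		}
		// On failure the log shows a full diagnostics pass, then a retry.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}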
	I1217 02:10:21.771406 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:21.782951 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:21.783026 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:21.809728 1498704 cri.go:89] found id: ""
	I1217 02:10:21.809750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.809758 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:21.809765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:21.809824 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:21.841207 1498704 cri.go:89] found id: ""
	I1217 02:10:21.841233 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.841242 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:21.841248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:21.841307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:21.868982 1498704 cri.go:89] found id: ""
	I1217 02:10:21.869008 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.869017 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:21.869023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:21.869102 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:21.895994 1498704 cri.go:89] found id: ""
	I1217 02:10:21.896030 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.896040 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:21.896046 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:21.896117 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:21.927675 1498704 cri.go:89] found id: ""
	I1217 02:10:21.927767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.927786 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:21.927798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:21.927886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:21.956133 1498704 cri.go:89] found id: ""
	I1217 02:10:21.956157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.956166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:21.956172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:21.956235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:21.987411 1498704 cri.go:89] found id: ""
	I1217 02:10:21.987442 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.987451 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:21.987458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:21.987528 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:22.018001 1498704 cri.go:89] found id: ""
	I1217 02:10:22.018031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:22.018041 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.018058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.018072 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.077509 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.077544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:22.094048 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.094152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.179483 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.170164   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.171129   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.172667   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.173275   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.174996   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.179527 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:22.179552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:22.208002 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.208053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.745839 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:24.756980 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:24.757073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:24.781924 1498704 cri.go:89] found id: ""
	I1217 02:10:24.781947 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.781955 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:24.781962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:24.782022 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:24.807686 1498704 cri.go:89] found id: ""
	I1217 02:10:24.807709 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.807718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:24.807725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:24.807785 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:24.833146 1498704 cri.go:89] found id: ""
	I1217 02:10:24.833177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.833197 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:24.833204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:24.833268 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:24.859474 1498704 cri.go:89] found id: ""
	I1217 02:10:24.859496 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.859505 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:24.859523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:24.859585 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:24.885498 1498704 cri.go:89] found id: ""
	I1217 02:10:24.885523 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.885532 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:24.885549 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:24.885608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:24.910357 1498704 cri.go:89] found id: ""
	I1217 02:10:24.910394 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.910403 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:24.910410 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:24.910487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:24.935548 1498704 cri.go:89] found id: ""
	I1217 02:10:24.935572 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.935581 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:24.935588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:24.935650 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:24.961748 1498704 cri.go:89] found id: ""
	I1217 02:10:24.961774 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.961813 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:24.961831 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:24.961852 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.989413 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:24.989488 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.046752 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.046797 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.074232 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.074268 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.166951 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.152840   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.157975   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.158869   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.160827   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.161145   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.166980 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:25.166994 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
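The "container status" command used in each cycle is built to degrade gracefully: `which crictl || echo crictl` substitutes crictl's full path when the binary is installed (and leaves the bare name otherwise), and the trailing || sudo docker ps -a falls back to the docker CLI if the crictl invocation fails. A sketch of invoking the same compound command from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact command string from the log; bash resolves the fallbacks.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
	}
}

Running it through bash -c (rather than exec'ing crictl directly) is what makes the backtick substitution and the || chain work.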
	I1217 02:10:27.699737 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:27.710317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:27.710401 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:27.735667 1498704 cri.go:89] found id: ""
	I1217 02:10:27.735694 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.735703 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:27.735709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:27.735770 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:27.764035 1498704 cri.go:89] found id: ""
	I1217 02:10:27.764061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.764070 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:27.764076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:27.764136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:27.788237 1498704 cri.go:89] found id: ""
	I1217 02:10:27.788265 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.788273 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:27.788280 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:27.788340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:27.815686 1498704 cri.go:89] found id: ""
	I1217 02:10:27.815714 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.815723 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:27.815730 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:27.815792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:27.846482 1498704 cri.go:89] found id: ""
	I1217 02:10:27.846510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.846518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:27.846525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:27.846584 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:27.871189 1498704 cri.go:89] found id: ""
	I1217 02:10:27.871217 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.871227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:27.871233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:27.871292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:27.899034 1498704 cri.go:89] found id: ""
	I1217 02:10:27.899056 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.899064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:27.899070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:27.899128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:27.923014 1498704 cri.go:89] found id: ""
	I1217 02:10:27.923037 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.923046 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:27.923055 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:27.923066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:27.948254 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:27.948289 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:27.978557 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:27.978582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.033709 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.033748 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:28.049287 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:28.049315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:28.120598 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:28.111016   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.111430   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113055   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113399   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.114622   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:30.621228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:30.633415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:30.633544 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:30.660114 1498704 cri.go:89] found id: ""
	I1217 02:10:30.660186 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.660208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:30.660228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:30.660315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:30.687423 1498704 cri.go:89] found id: ""
	I1217 02:10:30.687450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.687459 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:30.687466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:30.687542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:30.712536 1498704 cri.go:89] found id: ""
	I1217 02:10:30.712568 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.712577 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:30.712584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:30.712658 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:30.736913 1498704 cri.go:89] found id: ""
	I1217 02:10:30.736983 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.737007 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:30.737025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:30.737115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:30.761778 1498704 cri.go:89] found id: ""
	I1217 02:10:30.761852 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.761875 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:30.761889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:30.761963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:30.789829 1498704 cri.go:89] found id: ""
	I1217 02:10:30.789854 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.789863 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:30.789869 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:30.789930 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:30.815268 1498704 cri.go:89] found id: ""
	I1217 02:10:30.815296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.815304 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:30.815311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:30.815373 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:30.839769 1498704 cri.go:89] found id: ""
	I1217 02:10:30.839793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.839802 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:30.839811 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:30.839823 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:30.854187 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:30.854216 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:30.917680 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:30.908973   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.909688   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911279   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911863   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.913482   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:30.917706 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:30.917718 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:30.943267 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:30.943300 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:30.970294 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:30.970374 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
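One last detail: the order of the "Gathering logs for ..." steps differs from cycle to cycle (kubelet first in some passes, containerd or container status first in others). That is consistent with iterating over a Go map, whose iteration order is deliberately randomized, though whether minikube actually stores its log sources in a map is an assumption here. A minimal demonstration; the keys are just the labels from the log, not minikube's data structures:

package main

import "fmt"

func main() {
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for name := range sources { // order varies from run to run
		fmt.Println("Gathering logs for", name, "...")
	}
}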
	I1217 02:10:33.525981 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:33.536356 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:33.536427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:33.561187 1498704 cri.go:89] found id: ""
	I1217 02:10:33.561210 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.561219 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:33.561225 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:33.561287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:33.589979 1498704 cri.go:89] found id: ""
	I1217 02:10:33.590002 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.590012 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:33.590023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:33.590082 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:33.615543 1498704 cri.go:89] found id: ""
	I1217 02:10:33.615567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.615576 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:33.615583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:33.615644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:33.648052 1498704 cri.go:89] found id: ""
	I1217 02:10:33.648080 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.648089 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:33.648095 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:33.648162 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:33.676343 1498704 cri.go:89] found id: ""
	I1217 02:10:33.676376 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.676386 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:33.676392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:33.676459 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:33.707262 1498704 cri.go:89] found id: ""
	I1217 02:10:33.707338 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.707353 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:33.707359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:33.707419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:33.732853 1498704 cri.go:89] found id: ""
	I1217 02:10:33.732920 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.732945 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:33.732963 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:33.733053 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:33.757542 1498704 cri.go:89] found id: ""
	I1217 02:10:33.757567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.757576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
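The block above is minikube probing, component by component, for any CRI container whose name matches an expected control-plane piece; the repeated found id: "" lines mean every probe came back empty. A minimal sketch of that probe loop, assuming only that crictl is on the PATH and pointed at the default containerd socket (an illustration, not minikube's actual cri.go code):

```go
// Probe each expected control-plane component by name via crictl.
// An empty ID list means the component's container never started.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// crictl ps -a --quiet prints one container ID per line, or nothing.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
```

When every name comes back empty, minikube falls through to the log-gathering steps that follow.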
	I1217 02:10:33.757585 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:33.757596 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:33.821758 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:33.813865   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.814366   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.815953   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.816345   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.817904   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
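Every one of these describe-nodes failures reduces to the same root cause: nothing is listening on localhost:8443, the server address in /var/lib/minikube/kubeconfig, so kubectl's API-group discovery is refused before any describe can happen. A quick way to confirm that independently of kubectl is a plain TCP dial; a minimal sketch (the port is taken from the errors above, adjust for other setups):

```go
// Reproduce the "connection refused" seen above with a bare TCP dial,
// independent of kubectl and the kubeconfig contents.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Typically: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```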
	I1217 02:10:33.821777 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:33.821791 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:33.846519 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:33.846555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:33.873755 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:33.873782 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.930246 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:33.930282 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
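The timestamps show the whole sequence repeating on a roughly three-second cadence: minikube keeps re-running pgrep -xnf kube-apiserver.*minikube.* and re-gathering logs until an apiserver process appears or the start deadline expires. A minimal sketch of that polling pattern, with an illustrative interval and timeout rather than minikube's real values:

```go
// Poll for a running kube-apiserver process until a deadline.
// pgrep exits nonzero when nothing matches, so err == nil means found.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```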
	I1217 02:10:36.445766 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:36.456503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:36.456576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:36.483872 1498704 cri.go:89] found id: ""
	I1217 02:10:36.483894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.483903 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:36.483909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:36.483970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:36.508742 1498704 cri.go:89] found id: ""
	I1217 02:10:36.508765 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.508774 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:36.508780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:36.508838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:36.535472 1498704 cri.go:89] found id: ""
	I1217 02:10:36.535511 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.535520 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:36.535527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:36.535591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:36.566274 1498704 cri.go:89] found id: ""
	I1217 02:10:36.566296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.566305 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:36.566311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:36.566372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:36.590882 1498704 cri.go:89] found id: ""
	I1217 02:10:36.590904 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.590912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:36.590918 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:36.590977 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:36.614768 1498704 cri.go:89] found id: ""
	I1217 02:10:36.614793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.614802 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:36.614808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:36.614889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:36.643752 1498704 cri.go:89] found id: ""
	I1217 02:10:36.643778 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.643787 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:36.643794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:36.643857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:36.672151 1498704 cri.go:89] found id: ""
	I1217 02:10:36.672177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.672186 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:36.672194 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:36.672208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:36.733511 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:36.733544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:36.752180 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:36.752255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:36.815443 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:36.807321   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.807927   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.809664   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.810137   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.811712   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:36.815465 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:36.815478 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:36.840305 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:36.840349 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
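The container-status command is a deliberately defensive one-liner: "which crictl || echo crictl" keeps the pipeline alive even when which finds nothing (the bare crictl invocation then fails and triggers the "|| sudo docker ps -a" fallback). The same preference order expressed directly, as a hedged sketch:

```go
// Prefer crictl for container status; fall back to the docker CLI
// when crictl is missing or fails. Purely illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		fmt.Print(string(out))
		return
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("neither crictl nor docker produced container status:", err)
		return
	}
	fmt.Print(string(out))
}
```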
	I1217 02:10:39.373770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:39.386294 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:39.386380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:39.420073 1498704 cri.go:89] found id: ""
	I1217 02:10:39.420117 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.420126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:39.420132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:39.420210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:39.454303 1498704 cri.go:89] found id: ""
	I1217 02:10:39.454327 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.454338 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:39.454344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:39.454402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:39.483117 1498704 cri.go:89] found id: ""
	I1217 02:10:39.483143 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.483152 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:39.483159 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:39.483236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:39.507851 1498704 cri.go:89] found id: ""
	I1217 02:10:39.507927 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.507942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:39.507949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:39.508011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:39.535318 1498704 cri.go:89] found id: ""
	I1217 02:10:39.535344 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.535353 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:39.535359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:39.535460 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:39.559510 1498704 cri.go:89] found id: ""
	I1217 02:10:39.559587 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.559602 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:39.559610 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:39.559670 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:39.588446 1498704 cri.go:89] found id: ""
	I1217 02:10:39.588477 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.588487 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:39.588493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:39.588597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:39.616016 1498704 cri.go:89] found id: ""
	I1217 02:10:39.616041 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.616049 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:39.616058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:39.616069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:39.678516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:39.678553 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:39.698413 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:39.698440 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:39.766310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:39.757858   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.758625   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760117   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760571   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.762054   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:39.766333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:39.766347 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:39.791602 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:39.791641 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:42.319919 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:42.330880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:42.330962 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:42.355776 1498704 cri.go:89] found id: ""
	I1217 02:10:42.355798 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.355807 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:42.355813 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:42.355872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:42.393050 1498704 cri.go:89] found id: ""
	I1217 02:10:42.393084 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.393093 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:42.393100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:42.393159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:42.426120 1498704 cri.go:89] found id: ""
	I1217 02:10:42.426157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.426166 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:42.426174 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:42.426245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:42.456881 1498704 cri.go:89] found id: ""
	I1217 02:10:42.456917 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.456926 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:42.456932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:42.456999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:42.481272 1498704 cri.go:89] found id: ""
	I1217 02:10:42.481298 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.481307 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:42.481312 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:42.481372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:42.506468 1498704 cri.go:89] found id: ""
	I1217 02:10:42.506497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.506506 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:42.506512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:42.506572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:42.531395 1498704 cri.go:89] found id: ""
	I1217 02:10:42.531460 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.531476 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:42.531484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:42.531552 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:42.555791 1498704 cri.go:89] found id: ""
	I1217 02:10:42.555814 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.555822 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:42.555831 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:42.555843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:42.611764 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:42.611800 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:42.627436 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:42.627463 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:42.717562 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:42.717584 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:42.717597 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:42.742727 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:42.742763 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:45.269723 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:45.281660 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:45.281736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:45.307916 1498704 cri.go:89] found id: ""
	I1217 02:10:45.307941 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.307950 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:45.307956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:45.308021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:45.337837 1498704 cri.go:89] found id: ""
	I1217 02:10:45.337862 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.337871 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:45.337878 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:45.337943 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:45.382867 1498704 cri.go:89] found id: ""
	I1217 02:10:45.382894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.382903 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:45.382909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:45.382970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:45.424600 1498704 cri.go:89] found id: ""
	I1217 02:10:45.424629 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.424637 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:45.424644 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:45.424707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:45.456469 1498704 cri.go:89] found id: ""
	I1217 02:10:45.456497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.456505 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:45.456511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:45.456574 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:45.482345 1498704 cri.go:89] found id: ""
	I1217 02:10:45.482370 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.482378 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:45.482385 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:45.482450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:45.507901 1498704 cri.go:89] found id: ""
	I1217 02:10:45.507930 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.507948 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:45.507955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:45.508065 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:45.532875 1498704 cri.go:89] found id: ""
	I1217 02:10:45.532896 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.532904 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:45.532913 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:45.532924 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:45.589239 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:45.589273 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:45.604011 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:45.604045 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:45.695710 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:45.686715   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.687431   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689161   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689946   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.691789   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:45.695788 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:45.695808 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:45.721274 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:45.721310 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.251294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:48.261750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.261825 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.286414 1498704 cri.go:89] found id: ""
	I1217 02:10:48.286441 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.286450 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.286457 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.286515 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.315314 1498704 cri.go:89] found id: ""
	I1217 02:10:48.315336 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.315344 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.315351 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.315411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.339435 1498704 cri.go:89] found id: ""
	I1217 02:10:48.339461 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.339469 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.339476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.339543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.363969 1498704 cri.go:89] found id: ""
	I1217 02:10:48.364045 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.364061 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.364069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.364134 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.391387 1498704 cri.go:89] found id: ""
	I1217 02:10:48.391409 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.391418 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.391425 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.391489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.422985 1498704 cri.go:89] found id: ""
	I1217 02:10:48.423006 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.423014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.423021 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.423081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.451561 1498704 cri.go:89] found id: ""
	I1217 02:10:48.451588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.451598 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.451605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:48.451667 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:48.477573 1498704 cri.go:89] found id: ""
	I1217 02:10:48.477597 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.477607 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:48.477616 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:48.477627 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:48.503190 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:48.503227 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.531901 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:48.531927 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:48.590637 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:48.590670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:48.606410 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:48.606441 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:48.698001 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:48.689453   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.690595   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692088   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692610   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.694141   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:51.198775 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:51.210128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:51.210207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:51.239455 1498704 cri.go:89] found id: ""
	I1217 02:10:51.239482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.239491 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:51.239504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:51.239587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:51.265468 1498704 cri.go:89] found id: ""
	I1217 02:10:51.265541 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.265565 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:51.265583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:51.265684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:51.290269 1498704 cri.go:89] found id: ""
	I1217 02:10:51.290294 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.290303 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:51.290310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:51.290403 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:51.315672 1498704 cri.go:89] found id: ""
	I1217 02:10:51.315697 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.315706 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:51.315712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:51.315775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:51.345852 1498704 cri.go:89] found id: ""
	I1217 02:10:51.345922 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.345938 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:51.345945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:51.346021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:51.374855 1498704 cri.go:89] found id: ""
	I1217 02:10:51.374884 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.374892 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:51.374899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:51.374967 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:51.408516 1498704 cri.go:89] found id: ""
	I1217 02:10:51.408553 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.408563 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:51.408569 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:51.408636 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:51.443401 1498704 cri.go:89] found id: ""
	I1217 02:10:51.443428 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.443436 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:51.443445 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:51.443474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:51.499872 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:51.499907 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:51.514690 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:51.514759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:51.581421 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:51.573065   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.573700   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.575403   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.576080   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.577582   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:51.581455 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:51.581470 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:51.606921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:51.606964 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.151396 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:54.162403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:54.162479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:54.188307 1498704 cri.go:89] found id: ""
	I1217 02:10:54.188331 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.188340 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:54.188347 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:54.188411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:54.222781 1498704 cri.go:89] found id: ""
	I1217 02:10:54.222803 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.222818 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:54.222824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:54.222886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:54.251344 1498704 cri.go:89] found id: ""
	I1217 02:10:54.251415 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.251439 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:54.251451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:54.251535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:54.280867 1498704 cri.go:89] found id: ""
	I1217 02:10:54.280889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.280898 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:54.280904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:54.280966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:54.306150 1498704 cri.go:89] found id: ""
	I1217 02:10:54.306177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.306185 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:54.306192 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:54.306250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:54.330272 1498704 cri.go:89] found id: ""
	I1217 02:10:54.330296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.330310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:54.330317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:54.330375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:54.359393 1498704 cri.go:89] found id: ""
	I1217 02:10:54.359423 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.359431 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:54.359438 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:54.359525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:54.392745 1498704 cri.go:89] found id: ""
	I1217 02:10:54.392780 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.392804 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:54.392822 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:54.392835 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:54.469149 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:54.460070   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.460755   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462299   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462877   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.464624   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:54.469171 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:54.469185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:54.495699 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:54.495738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.524004 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:54.524031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:54.579558 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:54.579592 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
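Each "Gathering logs for ..." step is just a shell-out: the last 400 lines of the kubelet and containerd systemd units, plus kernel messages filtered to warning level and above. A minimal sketch that collects the same three journald/dmesg sources (requires systemd and sudo; illustrative only):

```go
// Gather the same log sources the test harness pulls on each retry.
// The shell commands are copied verbatim from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s", name, out)
	}
}
```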
	I1217 02:10:57.095655 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:57.106067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:57.106145 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:57.130932 1498704 cri.go:89] found id: ""
	I1217 02:10:57.130961 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.130970 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:57.130976 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:57.131046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:57.160073 1498704 cri.go:89] found id: ""
	I1217 02:10:57.160098 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.160107 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:57.160113 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:57.160173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:57.184768 1498704 cri.go:89] found id: ""
	I1217 02:10:57.184793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.184802 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:57.184808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:57.184867 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:57.210332 1498704 cri.go:89] found id: ""
	I1217 02:10:57.210358 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.210367 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:57.210374 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:57.210457 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:57.234920 1498704 cri.go:89] found id: ""
	I1217 02:10:57.234984 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.234999 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:57.235007 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:57.235072 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:57.260151 1498704 cri.go:89] found id: ""
	I1217 02:10:57.260183 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.260193 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:57.260201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:57.260310 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:57.287966 1498704 cri.go:89] found id: ""
	I1217 02:10:57.288000 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.288009 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:57.288032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:57.288115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:57.312191 1498704 cri.go:89] found id: ""
	I1217 02:10:57.312252 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.312284 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:57.312306 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:57.312330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:57.344168 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:57.344196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:57.400635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:57.400672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:57.416567 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:57.416594 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:57.485990 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:57.478006   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.478609   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480125   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480618   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.482100   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:57.486013 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:57.486028 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.011650 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:00.083065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:00.083205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:00.177092 1498704 cri.go:89] found id: ""
	I1217 02:11:00.177120 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.177129 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:00.177137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:00.177210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:00.240557 1498704 cri.go:89] found id: ""
	I1217 02:11:00.240645 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.240670 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:00.240689 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:00.240818 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:00.290983 1498704 cri.go:89] found id: ""
	I1217 02:11:00.291075 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.291101 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:00.291120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:00.291245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:00.339816 1498704 cri.go:89] found id: ""
	I1217 02:11:00.339906 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.339935 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:00.339955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:00.340060 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:00.400482 1498704 cri.go:89] found id: ""
	I1217 02:11:00.400508 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.400516 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:00.400525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:00.400594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:00.437316 1498704 cri.go:89] found id: ""
	I1217 02:11:00.437386 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.437413 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:00.437432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:00.437531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:00.464791 1498704 cri.go:89] found id: ""
	I1217 02:11:00.464859 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.464881 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:00.464899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:00.464986 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:00.492400 1498704 cri.go:89] found id: ""
	I1217 02:11:00.492468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.492492 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:00.492514 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:00.492551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:00.549202 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:00.549237 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:00.564046 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:00.564073 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:00.636379 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:00.636409 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:00.636423 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.666039 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:00.666076 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.197992 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:03.209540 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:03.209610 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:03.237337 1498704 cri.go:89] found id: ""
	I1217 02:11:03.237411 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.237436 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:03.237458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:03.237545 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:03.262191 1498704 cri.go:89] found id: ""
	I1217 02:11:03.262213 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.262221 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:03.262228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:03.262286 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:03.286816 1498704 cri.go:89] found id: ""
	I1217 02:11:03.286840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.286850 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:03.286856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:03.286915 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:03.310933 1498704 cri.go:89] found id: ""
	I1217 02:11:03.311007 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.311023 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:03.311031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:03.311089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:03.334605 1498704 cri.go:89] found id: ""
	I1217 02:11:03.334628 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.334637 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:03.334643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:03.334701 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:03.359646 1498704 cri.go:89] found id: ""
	I1217 02:11:03.359681 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.359690 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:03.359697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:03.359789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:03.391919 1498704 cri.go:89] found id: ""
	I1217 02:11:03.391946 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.391955 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:03.391962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:03.392025 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:03.419543 1498704 cri.go:89] found id: ""
	I1217 02:11:03.419567 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.419576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:03.419586 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:03.419600 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.455897 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:03.455925 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:03.512216 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:03.512255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:03.527344 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:03.527372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:03.591374 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:03.591396 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:03.591408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.117735 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:06.128394 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:06.128466 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:06.155397 1498704 cri.go:89] found id: ""
	I1217 02:11:06.155420 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.155430 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:06.155436 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:06.155669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:06.185554 1498704 cri.go:89] found id: ""
	I1217 02:11:06.185631 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.185682 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:06.185697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:06.185769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:06.214540 1498704 cri.go:89] found id: ""
	I1217 02:11:06.214564 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.214573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:06.214579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:06.214637 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:06.240468 1498704 cri.go:89] found id: ""
	I1217 02:11:06.240492 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.240501 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:06.240507 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:06.240570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:06.266674 1498704 cri.go:89] found id: ""
	I1217 02:11:06.266697 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.266706 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:06.266712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:06.266781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:06.292194 1498704 cri.go:89] found id: ""
	I1217 02:11:06.292218 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.292227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:06.292233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:06.292295 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:06.320979 1498704 cri.go:89] found id: ""
	I1217 02:11:06.321002 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.321011 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:06.321017 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:06.321074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:06.347269 1498704 cri.go:89] found id: ""
	I1217 02:11:06.347294 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.347303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:06.347315 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:06.347326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:06.409046 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:06.409101 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:06.425379 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:06.425406 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.490322 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.490345 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:06.490357 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.515786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:06.515825 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.043785 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:09.054506 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:09.054580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:09.079819 1498704 cri.go:89] found id: ""
	I1217 02:11:09.079848 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.079856 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:09.079862 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:09.079921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:09.104928 1498704 cri.go:89] found id: ""
	I1217 02:11:09.104953 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.104963 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:09.104969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:09.105031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:09.130212 1498704 cri.go:89] found id: ""
	I1217 02:11:09.130238 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.130246 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:09.130255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:09.130358 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:09.159130 1498704 cri.go:89] found id: ""
	I1217 02:11:09.159153 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.159162 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:09.159169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:09.159245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:09.184267 1498704 cri.go:89] found id: ""
	I1217 02:11:09.184292 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.184301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:09.184307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:09.184371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:09.209170 1498704 cri.go:89] found id: ""
	I1217 02:11:09.209195 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.209204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:09.209210 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:09.209271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:09.235842 1498704 cri.go:89] found id: ""
	I1217 02:11:09.235869 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.235878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:09.235884 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:09.235946 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:09.265413 1498704 cri.go:89] found id: ""
	I1217 02:11:09.265445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.265454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:09.265463 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.265475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.302759 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:09.302784 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:09.358361 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:09.358394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:09.378248 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:09.378278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.451227 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.451247 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:09.451260 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:11.977784 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.988725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:11.988798 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:12.015755 1498704 cri.go:89] found id: ""
	I1217 02:11:12.015778 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.015788 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:12.015795 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:12.015866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:12.042225 1498704 cri.go:89] found id: ""
	I1217 02:11:12.042250 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.042259 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:12.042269 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:12.042328 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:12.067951 1498704 cri.go:89] found id: ""
	I1217 02:11:12.067977 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.067987 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:12.067993 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:12.068054 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:12.094539 1498704 cri.go:89] found id: ""
	I1217 02:11:12.094565 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.094574 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:12.094580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:12.094641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:12.120422 1498704 cri.go:89] found id: ""
	I1217 02:11:12.120445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.120454 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:12.120461 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:12.120521 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:12.146437 1498704 cri.go:89] found id: ""
	I1217 02:11:12.146465 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.146491 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:12.146498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:12.146560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:12.171817 1498704 cri.go:89] found id: ""
	I1217 02:11:12.171840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.171849 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:12.171855 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:12.171914 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:12.200987 1498704 cri.go:89] found id: ""
	I1217 02:11:12.201013 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.201022 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:12.201031 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.201043 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:12.232701 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:12.232731 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.288687 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.288722 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.303401 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.303479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.371087 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.371112 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:12.371125 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:14.899732 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.913037 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:14.913112 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:14.939368 1498704 cri.go:89] found id: ""
	I1217 02:11:14.939399 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.939408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.939415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:14.939476 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:14.964809 1498704 cri.go:89] found id: ""
	I1217 02:11:14.964835 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.964844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.964849 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:14.964911 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:14.992442 1498704 cri.go:89] found id: ""
	I1217 02:11:14.992468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.992477 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.992483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:14.992542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:15.029492 1498704 cri.go:89] found id: ""
	I1217 02:11:15.029518 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.029527 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:15.029534 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:15.029604 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:15.059736 1498704 cri.go:89] found id: ""
	I1217 02:11:15.059760 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.059770 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:15.059776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:15.059841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:15.086908 1498704 cri.go:89] found id: ""
	I1217 02:11:15.086991 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.087014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:15.087029 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:15.087104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:15.113800 1498704 cri.go:89] found id: ""
	I1217 02:11:15.113829 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.113838 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.113844 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:15.113903 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:15.140421 1498704 cri.go:89] found id: ""
	I1217 02:11:15.140445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.140454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.140463 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.140475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.197971 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.198003 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.213157 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.213186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.278282 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.278303 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:15.278316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:15.303867 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.303900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.833800 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.844470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:17.844546 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:17.871228 1498704 cri.go:89] found id: ""
	I1217 02:11:17.871254 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.871262 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.871270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:17.871345 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:17.909403 1498704 cri.go:89] found id: ""
	I1217 02:11:17.909430 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.909438 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.909444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:17.909505 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:17.942319 1498704 cri.go:89] found id: ""
	I1217 02:11:17.942341 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.942348 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.942355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:17.942416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:17.967521 1498704 cri.go:89] found id: ""
	I1217 02:11:17.967546 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.967554 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.967561 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:17.967619 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:17.995465 1498704 cri.go:89] found id: ""
	I1217 02:11:17.995488 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.995518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:17.995526 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:17.995587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:18.023559 1498704 cri.go:89] found id: ""
	I1217 02:11:18.023587 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.023596 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.023603 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:18.023664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:18.049983 1498704 cri.go:89] found id: ""
	I1217 02:11:18.050011 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.050027 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.050033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:18.050096 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:18.081999 1498704 cri.go:89] found id: ""
	I1217 02:11:18.082023 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.082033 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.082042 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.082054 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.096662 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.096692 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.160156 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
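
The five kubectl retries above all fail identically: the TCP connection to [::1]:8443 is refused outright, so kubectl never gets as far as an HTTP exchange; the apiserver simply is not listening. A quick standalone probe for that condition (a sketch, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A refused dial here is exactly the "connect: connection refused"
    	// state reported by kubectl in the stderr block above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
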
	I1217 02:11:18.160179 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:18.160192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:18.185291 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.185325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.216271 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.216298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
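
The container-status command above is a shell fallback: run crictl if "which" can resolve it, and fall back to "docker ps -a" if the first pipeline fails. A simplified Go equivalent of that try-then-fall-back shape (sketch only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, mirroring
    // the `crictl ps -a || docker ps -a` fallback in the command above.
    func containerStatus() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("neither runtime answered:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
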
	I1217 02:11:20.775311 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:20.789631 1498704 out.go:203] 
	W1217 02:11:20.792902 1498704 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:11:20.792939 1498704 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:11:20.792950 1498704 out.go:285] * Related issues:
	* Related issues:
	W1217 02:11:20.792967 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:11:20.792986 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:11:20.795906 1498704 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
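
The fatal step is the pgrep probe at 02:11:20 above: the start path polls for a kube-apiserver process for up to 6m0s and exits with K8S_APISERVER_MISSING when the pattern never matches. A hedged sketch of such a polling loop (illustrative deadline; not minikube's internals):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Poll the same way the failed wait above does: pgrep exits 0 and
    	// prints a PID only when a full command line matches the pattern.
    	deadline := time.Now().Add(30 * time.Second) // minikube waits 6m0s
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Printf("apiserver up, pid %s", out)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver process never appeared") // the K8S_APISERVER_MISSING case
    }
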
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-456492
helpers_test.go:244: (dbg) docker inspect newest-cni-456492:

-- stdout --
	[
	    {
	        "Id": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	        "Created": "2025-12-17T01:55:16.478266179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1498839,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:13.106483917Z",
	            "FinishedAt": "2025-12-17T02:05:11.800057613Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hosts",
	        "LogPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2-json.log",
	        "Name": "/newest-cni-456492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-456492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-456492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	                "LowerDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456492",
	                "Source": "/var/lib/docker/volumes/newest-cni-456492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456492",
	                "name.minikube.sigs.k8s.io": "newest-cni-456492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab62f167f6067cd4de4467e8c5dccfa413a051915ec69dabeccc65bc59cf0aee",
	            "SandboxKey": "/var/run/docker/netns/ab62f167f606",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-456492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:ab:b6:47:86:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78c732410c8ee8b3c147900aac111eb07f35c057f64efcecb5d20570fed785bc",
	                    "EndpointID": "c3b1f12eab3f1b8581f7a3375c215b8790019ebdc7d258d9fd03a25fc5d36dd1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456492",
	                        "72c4fe7eb784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
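
Rather than parsing the whole JSON dump above, the helpers read single fields with Go templates, as in the status probe that follows (--format={{.Host}}) and the earlier --format={{.State.Status}} calls. A small sketch of that pattern (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState pulls one field out of `docker inspect` with a Go
    // template instead of decoding the full JSON document.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	state, err := containerState("newest-cni-456492")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("container state:", state) // "running" per the inspect above
    }
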
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (346.045273ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25: (1.563455819s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-069646 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ pause   │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ unpause │ -p default-k8s-diff-port-069646 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p newest-cni-456492 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-456492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:05:12.850501 1498704 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:05:12.850637 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.850649 1498704 out.go:374] Setting ErrFile to fd 2...
	I1217 02:05:12.850655 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.851041 1498704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:05:12.851511 1498704 out.go:368] Setting JSON to false
	I1217 02:05:12.852479 1498704 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28063,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:05:12.852572 1498704 start.go:143] virtualization:  
	I1217 02:05:12.855474 1498704 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:05:12.857672 1498704 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:12.857773 1498704 notify.go:221] Checking for updates...
	I1217 02:05:12.863254 1498704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:12.866037 1498704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:12.868948 1498704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:05:12.871863 1498704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:05:12.874787 1498704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:12.878103 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:12.878662 1498704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:12.900447 1498704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:05:12.900598 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:12.960234 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:12.950894493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:12.960347 1498704 docker.go:319] overlay module found
	I1217 02:05:12.963370 1498704 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:12.966210 1498704 start.go:309] selected driver: docker
	I1217 02:05:12.966233 1498704 start.go:927] validating driver "docker" against &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:12.966382 1498704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:12.967091 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:13.019814 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:13.010546439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
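
Both docker system info calls above request machine-readable output with --format "{{json .}}" and decode it; fields visible in the logged dump (NCPU, MemTotal, CgroupDriver) feed later decisions such as the cgroupfs configuration below. A trimmed-down sketch of that decode, keeping only a few of the many fields:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo is a small subset of the `docker system info` JSON above.
    type dockerInfo struct {
    	NCPU         int    `json:"NCPU"`
    	MemTotal     int64  `json:"MemTotal"`
    	CgroupDriver string `json:"CgroupDriver"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Printf("cpus=%d mem=%d driver=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver)
    }
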
	I1217 02:05:13.020178 1498704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:05:13.020210 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:13.020262 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:13.020307 1498704 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:13.023434 1498704 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 02:05:13.026234 1498704 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:05:13.029131 1498704 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:13.031994 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:13.032048 1498704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 02:05:13.032060 1498704 cache.go:65] Caching tarball of preloaded images
	I1217 02:05:13.032113 1498704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:13.032150 1498704 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:05:13.032162 1498704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 02:05:13.032281 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.052501 1498704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:13.052525 1498704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:13.052542 1498704 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:13.052572 1498704 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:13.052633 1498704 start.go:364] duration metric: took 38.597µs to acquireMachinesLock for "newest-cni-456492"
	I1217 02:05:13.052657 1498704 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:13.052663 1498704 fix.go:54] fixHost starting: 
	I1217 02:05:13.052926 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.069585 1498704 fix.go:112] recreateIfNeeded on newest-cni-456492: state=Stopped err=<nil>
	W1217 02:05:13.069617 1498704 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 02:05:11.635157 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:14.135122 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:16.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:13.072747 1498704 out.go:252] * Restarting existing docker container for "newest-cni-456492" ...
	I1217 02:05:13.072837 1498704 cli_runner.go:164] Run: docker start newest-cni-456492
	I1217 02:05:13.388698 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.414091 1498704 kic.go:430] container "newest-cni-456492" state is running.
	I1217 02:05:13.414525 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:13.433261 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.433961 1498704 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:13.434162 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:13.455043 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:13.455367 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:13.455376 1498704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:13.456190 1498704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:16.589394 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.589424 1498704 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 02:05:16.589509 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.608291 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.608611 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.608628 1498704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 02:05:16.748318 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.748417 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.766749 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.767082 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.767106 1498704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:16.899757 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
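
The SSH script above is an idempotent /etc/hosts edit: leave the file alone if a line for the hostname already exists, rewrite the 127.0.1.1 entry if there is one, and append otherwise. The same logic as a standalone Go sketch (the suffix match is a simplification of the shell's whitespace-anchored grep):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry maps 127.0.1.1 to the node hostname, as the shell
    // snippet above does, without touching a file that is already correct.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	for _, l := range lines {
    		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
    			return nil // an entry for this hostname already exists
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    		}
    	}
    	lines = append(lines, "127.0.1.1 "+hostname)
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "newest-cni-456492"); err != nil {
    		fmt.Println("hosts update failed:", err)
    	}
    }
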
	I1217 02:05:16.899788 1498704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:05:16.899820 1498704 ubuntu.go:190] setting up certificates
	I1217 02:05:16.899839 1498704 provision.go:84] configureAuth start
	I1217 02:05:16.899906 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:16.924665 1498704 provision.go:143] copyHostCerts
	I1217 02:05:16.924743 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:05:16.924752 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:05:16.924828 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:05:16.924938 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:05:16.924943 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:05:16.924976 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:05:16.925038 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:05:16.925047 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:05:16.925072 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:05:16.925127 1498704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
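
The server certificate generated above carries SANs for every address the endpoint might be dialed on: 127.0.0.1, the container IP 192.168.85.2, and the hostname aliases. A simplified, self-signed sketch with the same SAN list (the real flow signs with ca.pem/ca-key.pem rather than self-signing):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		fmt.Println("keygen failed:", err)
    		return
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-456492"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// The SAN set from the provisioning line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-456492"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		fmt.Println("cert generation failed:", err)
    		return
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
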
	I1217 02:05:17.601803 1498704 provision.go:177] copyRemoteCerts
	I1217 02:05:17.601873 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:17.601926 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.636357 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.741722 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:05:17.761034 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:17.779707 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:17.797837 1498704 provision.go:87] duration metric: took 897.968313ms to configureAuth
	I1217 02:05:17.797870 1498704 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:17.798087 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:17.798100 1498704 machine.go:97] duration metric: took 4.364124237s to provisionDockerMachine
	I1217 02:05:17.798118 1498704 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 02:05:17.798134 1498704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:17.798198 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:17.798254 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.815970 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.909838 1498704 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:17.913351 1498704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:17.913383 1498704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:17.913395 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:05:17.913453 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:05:17.913544 1498704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:05:17.913681 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:17.921360 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:17.939679 1498704 start.go:296] duration metric: took 141.5414ms for postStartSetup
	I1217 02:05:17.939826 1498704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:17.939877 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.957594 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.059706 1498704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:18.065122 1498704 fix.go:56] duration metric: took 5.012436797s for fixHost
	I1217 02:05:18.065156 1498704 start.go:83] releasing machines lock for "newest-cni-456492", held for 5.012509749s
	I1217 02:05:18.065242 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:18.082756 1498704 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:18.082825 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.083064 1498704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:05:18.083126 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.102210 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.102306 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.193581 1498704 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:18.286865 1498704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:18.291506 1498704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:18.291604 1498704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:18.301001 1498704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:18.301023 1498704 start.go:496] detecting cgroup driver to use...
	I1217 02:05:18.301056 1498704 detect.go:187] detected "cgroupfs" cgroup driver on host os
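
detect.go reports the host's cgroup driver as "cgroupfs", which drives the containerd and kubelet configuration below. A common way to make that call is to probe for the cgroup v2 unified hierarchy; this is a hedged sketch of the general technique, not minikube's actual detection code:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup v2 exposes cgroup.controllers at the root of the unified hierarchy;
	// its absence implies cgroup v1, where the cgroupfs driver is the usual choice.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 detected")
	} else {
		fmt.Println("cgroup v1 detected (cgroupfs)")
	}
}
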
	I1217 02:05:18.301104 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:05:18.318916 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:18.332388 1498704 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:05:18.332450 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:05:18.348560 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:05:18.361841 1498704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:05:18.501489 1498704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:05:18.625467 1498704 docker.go:234] disabling docker service ...
	I1217 02:05:18.625544 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:05:18.642408 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:05:18.656014 1498704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:05:18.765362 1498704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:05:18.886790 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:18.900617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:18.915221 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:18.924900 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:18.934313 1498704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:18.934389 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:18.943795 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.953183 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:18.962127 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.971122 1498704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:18.979419 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:18.988380 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:18.999817 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
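
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, set SystemdCgroup = false to match the cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true. The same edits can be expressed as regexp rewrites; a sketch assuming the file fits in memory (only a subset of the rules is shown):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Each pair mirrors one sed expression from the log above.
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}

After rewriting, the log reloads systemd and restarts containerd so the new TOML takes effect.
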
	I1217 02:05:19.010244 1498704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:19.018996 1498704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:19.026929 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.133908 1498704 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:05:19.268405 1498704 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:05:19.268490 1498704 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:05:19.272284 1498704 start.go:564] Will wait 60s for crictl version
	I1217 02:05:19.272347 1498704 ssh_runner.go:195] Run: which crictl
	I1217 02:05:19.275756 1498704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:19.301130 1498704 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:05:19.301201 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.322372 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.348617 1498704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:05:19.351633 1498704 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:05:19.367774 1498704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:05:19.371830 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
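
The bash one-liner above is an idempotent /etc/hosts update: the preceding grep checks whether the exact entry already exists, and if not, the pipeline strips any stale host.minikube.internal line, appends the fresh mapping, writes to a temp file, and sudo-copies it over /etc/hosts (a plain `>` redirect would run before privilege elevation). The same logic in Go, as a sketch:

package main

import (
	"os"
	"strings"
)

// ensureHostEntry keeps exactly one line mapping name in the hosts file,
// mirroring the grep-filter-append-copy one-liner in the log.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
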
	I1217 02:05:19.384786 1498704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:05:19.387816 1498704 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:19.387972 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:19.388067 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.414283 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.414309 1498704 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:05:19.414396 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.439246 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.439272 1498704 cache_images.go:86] Images are preloaded, skipping loading
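
containerd.go:627 concludes that every required image is already present by listing what the runtime has, so the preload tarball is not re-extracted. A sketch of that check, parsing `crictl images --output json`; the struct follows the CRI ListImages JSON shape, but treat the exact fields as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// One required image as an example; a real check would cover the full set.
	fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.10.1"])
}
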
	I1217 02:05:19.439280 1498704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:05:19.439400 1498704 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
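
In the kubelet unit rendered above, the empty ExecStart= line is deliberate systemd drop-in syntax: assigning an empty value clears the ExecStart inherited from the base kubelet.service before the next line sets the full override command, which is why ExecStart appears twice. A sketch of writing such a drop-in (paths as in the scp lines below; the flag list is abbreviated):

package main

import "os"

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
`

func main() {
	// The empty ExecStart= resets the base unit's command; systemd requires
	// a daemon-reload afterwards, which the log runs at 02:05:19.534869.
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
}
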
	I1217 02:05:19.439475 1498704 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:05:19.464932 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:19.464957 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:19.464978 1498704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:05:19.465000 1498704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:19.465118 1498704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
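
The kubeadm/kubelet/kube-proxy YAML above is rendered from the `kubeadm options` struct logged at 02:05:19.465 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. minikube builds such manifests from Go templates; a compressed sketch of the technique (the template body is abbreviated, not the real one):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := struct {
		KubernetesVersion, ControlPlaneAddress, PodSubnet, ServiceCIDR string
		APIServerPort                                                  int
	}{"v1.35.0-beta.0", "control-plane.minikube.internal", "10.42.0.0/16", "10.96.0.0/12", 8443}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
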
	I1217 02:05:19.465204 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:19.473220 1498704 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:19.473323 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:19.481191 1498704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:05:19.494733 1498704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:19.508255 1498704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 02:05:19.521299 1498704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:19.524923 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.534869 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.640328 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:19.658104 1498704 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 02:05:19.658171 1498704 certs.go:195] generating shared ca certs ...
	I1217 02:05:19.658202 1498704 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:19.658408 1498704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:05:19.658487 1498704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:05:19.658525 1498704 certs.go:257] generating profile certs ...
	I1217 02:05:19.658693 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 02:05:19.658805 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 02:05:19.658882 1498704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 02:05:19.659021 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:05:19.659079 1498704 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:19.659103 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:05:19.659164 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:05:19.659220 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:05:19.659286 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:05:19.659364 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:19.660007 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:19.680759 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:05:19.702848 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:19.724636 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:05:19.743745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:19.766745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:19.785567 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:19.805217 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:05:19.823885 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:05:19.842565 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:05:19.861136 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:19.881009 1498704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:19.900011 1498704 ssh_runner.go:195] Run: openssl version
	I1217 02:05:19.907885 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.916589 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:19.925294 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929759 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929879 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.973048 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:19.981056 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:05:19.988859 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:05:19.996704 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001580 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001857 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.047306 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:05:20.055839 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.063938 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:05:20.072095 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076535 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076605 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.118765 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
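
The repeated openssl/test/ln sequence above installs each CA into OpenSSL's hashed trust-store layout: `openssl x509 -hash -noout` prints the certificate's subject hash, and the cert must be reachable as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0) for OpenSSL to find it as an issuer at verification time. A sketch of the linking step, shelling out for the hash since Go's standard library does not expose OpenSSL's canonical subject hash:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic `ln -fs`: drop any stale link first
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
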
	I1217 02:05:20.126976 1498704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:20.131206 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:20.172934 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:20.214362 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:20.255854 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:20.297036 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:20.339864 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
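
Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The equivalent check in pure Go with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
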
	I1217 02:05:20.381722 1498704 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:20.381822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:05:20.381904 1498704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:05:20.424644 1498704 cri.go:89] found id: ""
	I1217 02:05:20.424764 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:20.433427 1498704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:20.433456 1498704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:20.433550 1498704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:20.441251 1498704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:20.442099 1498704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.442456 1498704 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456492" cluster setting kubeconfig missing "newest-cni-456492" context setting]
	I1217 02:05:20.442986 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.445078 1498704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:20.453918 1498704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:05:20.453968 1498704 kubeadm.go:602] duration metric: took 20.505601ms to restartPrimaryControlPlane
	I1217 02:05:20.453978 1498704 kubeadm.go:403] duration metric: took 72.266987ms to StartCluster
	I1217 02:05:20.453993 1498704 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.454058 1498704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.454938 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.455145 1498704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:05:20.455516 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:20.455530 1498704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:20.455683 1498704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456492"
	I1217 02:05:20.455704 1498704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456492"
	I1217 02:05:20.455734 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456291 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.456447 1498704 addons.go:70] Setting dashboard=true in profile "newest-cni-456492"
	I1217 02:05:20.456459 1498704 addons.go:239] Setting addon dashboard=true in "newest-cni-456492"
	W1217 02:05:20.456465 1498704 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:20.456487 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456873 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.457295 1498704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456492"
	I1217 02:05:20.457327 1498704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456492"
	I1217 02:05:20.457617 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.460758 1498704 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:20.464032 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:20.511072 1498704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:20.511238 1498704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:20.511526 1498704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456492"
	I1217 02:05:20.511584 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.512215 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.514400 1498704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.514426 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:20.514495 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.517419 1498704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 02:05:18.635204 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:21.135093 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:20.520345 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:20.520380 1498704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:20.520470 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.545933 1498704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.545958 1498704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:20.546028 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.571506 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.597655 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.610038 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.744231 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:20.749535 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.770211 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.807578 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:20.807656 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:20.822894 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:20.822966 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:20.838508 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:20.838583 1498704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:20.854473 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:20.854546 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:20.870442 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:20.870510 1498704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:20.892689 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:20.892763 1498704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:20.907212 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:20.907283 1498704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:05:20.920377 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:20.920447 1498704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:20.934242 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:20.934313 1498704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:20.949356 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:21.122136 1498704 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:05:21.122238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:21.122377 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122428 1498704 retry.go:31] will retry after 140.698925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122498 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122514 1498704 retry.go:31] will retry after 200.872114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122730 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122750 1498704 retry.go:31] will retry after 347.753215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.264115 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.324524 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:21.326955 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.326987 1498704 retry.go:31] will retry after 509.503403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.390952 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.391056 1498704 retry.go:31] will retry after 486.50092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.471226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.536155 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.536193 1498704 retry.go:31] will retry after 374.340896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.623199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:21.836797 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.878378 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:21.911452 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.932525 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.932573 1498704 retry.go:31] will retry after 673.446858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.024062 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.024104 1498704 retry.go:31] will retry after 357.640722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.030810 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.030855 1498704 retry.go:31] will retry after 697.108634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.122842 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:22.382402 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.447494 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.447529 1498704 retry.go:31] will retry after 907.58474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.606794 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:22.623237 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:22.712284 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.712316 1498704 retry.go:31] will retry after 1.166453431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.728640 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.790257 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.790294 1498704 retry.go:31] will retry after 693.242896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:23.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:25.634571 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:23.122710 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.356122 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:23.441808 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.441876 1498704 retry.go:31] will retry after 812.660244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.484193 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:23.553009 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.553088 1498704 retry.go:31] will retry after 1.540590446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.878932 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:23.940625 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.940657 1498704 retry.go:31] will retry after 1.715347401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.123129 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:24.255570 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.318166 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.318201 1498704 retry.go:31] will retry after 2.528105033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.622416 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.094702 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.122740 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.190434 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.190468 1498704 retry.go:31] will retry after 2.137532007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.622874 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.656976 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.735191 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.735228 1498704 retry.go:31] will retry after 1.824141068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.122718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.622402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.847039 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:26.915825 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.915864 1498704 retry.go:31] will retry after 3.628983163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
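Every failure above is the same symptom: nothing is listening on localhost:8443, so kubectl cannot download the OpenAPI schema it needs for client-side validation (hence the --validate=false hint in the error text). The interleaved `pgrep -xnf kube-apiserver.*minikube.*` probes are minikube checking whether the apiserver process has come back. A rough equivalent of that readiness gate in Go, assuming a plain TCP dial is an adequate stand-in for the process check (hypothetical helper, illustrative timeouts):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver's TCP port until a connection
// succeeds or the deadline passes. A refused dial is exactly the
// "connect: connection refused" seen in the validation errors above.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}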
	I1217 02:05:27.123109 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:27.329106 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:27.406949 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.406981 1498704 retry.go:31] will retry after 4.03347247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.560441 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:27.620941 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.620972 1498704 retry.go:31] will retry after 3.991176553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.623048 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:27.635077 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:29.635231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
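The node_ready.go lines from the parallel no-preload test (process 1494358) are a similar poll, this time through the Kubernetes API: fetch the node and inspect its Ready condition, retrying while the apiserver at 192.168.76.2:8443 refuses connections. A sketch of that check with client-go (kubeconfig path, poll interval, and loop structure are placeholder assumptions, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver is down
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "no-preload-178365")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}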
	I1217 02:05:28.123323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.622690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.123056 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.622383 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:30.122331 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
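
The `sudo pgrep -xnf kube-apiserver.*minikube.*` lines fire roughly every 500ms: process 1498704 is polling for the apiserver process to reappear while the addon applies keep failing. A minimal sketch of such a poll, assuming nothing about minikube's internals beyond the pgrep pattern shown (the 4-minute timeout is an assumption):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until a matching process exists
// or the context expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			// pgrep exits 0 only when a process matches the pattern.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}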
	I1217 02:05:30.545057 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:30.621785 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.621822 1498704 retry.go:31] will retry after 4.4452238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.622853 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.122373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.440743 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:31.509992 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.510031 1498704 retry.go:31] will retry after 5.407597033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.613135 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:31.622584 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:31.697739 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.697776 1498704 retry.go:31] will retry after 2.825488937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:32.122427 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:32.622356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:32.134521 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:34.135119 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:36.135210 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
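
Meanwhile process 1494358 is polling the no-preload-178365 node's Ready condition against 192.168.76.2:8443 roughly every 2s and hitting the same refused connection. A client-go sketch of that check (the node name and interval come from the log; the kubeconfig path is hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition is True.
func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down, this is the "connection refused"
		// surfaced by the node_ready.go lines above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(client, "no-preload-178365")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}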
	I1217 02:05:33.122865 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.622376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.122833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.523532 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:34.583134 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.583163 1498704 retry.go:31] will retry after 5.545323918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.622442 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:35.068147 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:35.122850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:35.134133 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.134169 1498704 retry.go:31] will retry after 4.861802964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.622377 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.122369 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.622378 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.918683 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:36.978447 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.978481 1498704 retry.go:31] will retry after 6.962519237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:37.122560 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:37.622836 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:38.635154 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:41.134707 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:38.122524 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.622862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.122871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.623166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.996206 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:40.063255 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.063292 1498704 retry.go:31] will retry after 7.781680021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.122526 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:40.129164 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:40.214505 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.214533 1498704 retry.go:31] will retry after 8.678807682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
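
Across attempts the "will retry after …" waits grow (3.99s, 4.45s, 5.41s, … 18.0s, 29.2s) but not monotonically, which is the signature of multiplicative backoff with random jitter. A minimal sketch of that policy (base, growth factor, cap, and jitter width are assumptions, not minikube's actual retry.go parameters):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns a wait that grows ~1.5x per attempt up to a cap, with
// +/-50% jitter, reproducing the rough shape of the waits in the log.
func backoff(attempt int) time.Duration {
	d := 3 * time.Second
	for i := 0; i < attempt; i++ {
		d = time.Duration(float64(d) * 1.5)
	}
	if limit := 30 * time.Second; d > limit {
		d = limit
	}
	// Jitter lands the wait in [0.5d, 1.5d), so later waits need not be
	// strictly increasing.
	return d/2 + time.Duration(rand.Float64()*float64(d))
}

func main() {
	for attempt := 0; attempt < 8; attempt++ {
		fmt.Printf("attempt %d: will retry after %s\n", attempt+1, backoff(attempt))
	}
}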
	I1217 02:05:40.622298 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.122333 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.622358 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.127159 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.622438 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:43.635439 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:46.135272 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:43.122461 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.622352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.941994 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:44.001689 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.001730 1498704 retry.go:31] will retry after 6.066883065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.123123 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:44.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.126164 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.623052 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.122898 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.622334 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.122393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.622323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.845223 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:48.634542 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:50.635081 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:47.908667 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:47.908705 1498704 retry.go:31] will retry after 18.007710991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.122861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.622412 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.894229 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:48.969090 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.969125 1498704 retry.go:31] will retry after 16.055685136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:49.122381 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:49.622837 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:50.069336 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:50.122996 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:50.134357 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.134397 1498704 retry.go:31] will retry after 18.576318696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.622399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.122356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.623152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.122522 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.622365 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:53.135083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:55.135448 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:53.123228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.622373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.622394 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.122388 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.122434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.622357 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.122345 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.622407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:57.635130 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:00.134795 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:58.122690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.622871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.122944 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.622822 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.123626 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.623133 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.122517 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.622861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.122995 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.622415 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:02.135223 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:04.634982 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:03.122366 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.623001 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.122805 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.622382 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.025226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:05.088234 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.088268 1498704 retry.go:31] will retry after 18.521411157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.122353 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.622518 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.916578 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:05.977704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.977737 1498704 retry.go:31] will retry after 29.235613176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.123051 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:06.623116 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.122863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.622361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:07.134988 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:09.135112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:11.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:08.123131 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.622326 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.711597 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:08.773115 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:08.773147 1498704 retry.go:31] will retry after 24.92518591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:09.122643 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:09.622393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.122375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.622634 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.122959 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.622850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.122346 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.622435 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:13.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:16.134662 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
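
Interleaved with the addon retries, a second minikube process (pid 1494358, profile no-preload-178365) is polling the node's Ready condition at node_ready.go:55 and getting connection refused from its own apiserver endpoint at 192.168.76.2:8443, so that cluster's control plane is down as well. A hedged client-go sketch of such a readiness check; the kubeconfig path and node name are taken from the log, but the surrounding program is invented:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has its Ready condition set
// to True. Any apiserver outage surfaces as the Get error, e.g. the
// "connect: connection refused" seen throughout this log.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady("/var/lib/minikube/kubeconfig", "no-preload-178365")
	fmt.Println(ok, err)
}
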
	I1217 02:06:13.122648 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.622828 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.123317 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.622872 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.122361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.622296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.622835 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.122778 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:18.135126 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:20.135188 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:18.123152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.623163 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.122407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.622841 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.123196 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.622898 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:20.622982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:20.655063 1498704 cri.go:89] found id: ""
	I1217 02:06:20.655091 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.655100 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:20.655106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:20.655169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:20.687901 1498704 cri.go:89] found id: ""
	I1217 02:06:20.687924 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.687932 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:20.687938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:20.687996 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:20.713818 1498704 cri.go:89] found id: ""
	I1217 02:06:20.713845 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.713854 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:20.713860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:20.713918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:20.738353 1498704 cri.go:89] found id: ""
	I1217 02:06:20.738376 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.738384 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:20.738396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:20.738455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:20.763275 1498704 cri.go:89] found id: ""
	I1217 02:06:20.763300 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.763309 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:20.763316 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:20.763377 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:20.787303 1498704 cri.go:89] found id: ""
	I1217 02:06:20.787328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.787337 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:20.787343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:20.787402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:20.812203 1498704 cri.go:89] found id: ""
	I1217 02:06:20.812230 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.812238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:20.812244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:20.812304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:20.836788 1498704 cri.go:89] found id: ""
	I1217 02:06:20.836814 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.836823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
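
At this point minikube sweeps the CRI runtime for each expected control-plane container; every probe above returns an empty ID list (`0 containers`), confirming that nothing was ever started under containerd. A small Go sketch of that crictl sweep, assuming crictl on PATH and passwordless sudo (minikube actually runs these commands over SSH via ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeControlPlane mirrors the crictl sweep in the log: for each
// expected component it asks crictl for matching container IDs; an
// empty result is the "0 containers" case logged above.
func probeControlPlane() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}

func main() { probeControlPlane() }
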
	I1217 02:06:20.836831 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:20.836842 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:20.901301 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:20.901324 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:20.901337 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:20.927207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:20.927244 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:20.955351 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:20.955377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:21.010892 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:21.010928 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
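
The repeating `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a fixed-cadence wait: the timestamps advance in roughly 500ms steps until an overall deadline passes, after which the log-gathering pass above (kubelet, dmesg, describe nodes, containerd, container status) kicks in. A sketch of that polling loop in Go; the interval and timeout values here are assumptions, not minikube's configured ones:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep on a fixed cadence until the
// kube-apiserver process appears or the deadline passes, mirroring
// the 500ms pgrep loop in the log.
func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver process not found within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
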
	W1217 02:06:22.635190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:25.135234 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:23.526340 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:23.536950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:23.537021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:23.561240 1498704 cri.go:89] found id: ""
	I1217 02:06:23.561267 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.561276 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:23.561282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:23.561340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:23.586385 1498704 cri.go:89] found id: ""
	I1217 02:06:23.586407 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.586415 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:23.586421 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:23.586479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:23.610820 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:06:23.612177 1498704 cri.go:89] found id: ""
	I1217 02:06:23.612201 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.612210 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:23.612216 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:23.612270 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1217 02:06:23.698147 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698227 1498704 retry.go:31] will retry after 35.769421328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698299 1498704 cri.go:89] found id: ""
	I1217 02:06:23.698328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.698348 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:23.698379 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:23.698473 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:23.730479 1498704 cri.go:89] found id: ""
	I1217 02:06:23.730555 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.730569 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:23.730577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:23.730656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:23.757694 1498704 cri.go:89] found id: ""
	I1217 02:06:23.757717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.757726 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:23.757732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:23.757802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:23.787070 1498704 cri.go:89] found id: ""
	I1217 02:06:23.787145 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.787162 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:23.787170 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:23.787231 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:23.815895 1498704 cri.go:89] found id: ""
	I1217 02:06:23.815928 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.815937 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:23.815947 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:23.815977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:23.845530 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:23.845558 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:23.904348 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:23.904385 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:23.919409 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:23.919438 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:23.986183 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:23.986246 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:23.986266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:26.512910 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:26.523572 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:26.523644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:26.549045 1498704 cri.go:89] found id: ""
	I1217 02:06:26.549077 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.549087 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:26.549100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:26.549181 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:26.573386 1498704 cri.go:89] found id: ""
	I1217 02:06:26.573409 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.573417 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:26.573423 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:26.573485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:26.597629 1498704 cri.go:89] found id: ""
	I1217 02:06:26.597673 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.597688 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:26.597695 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:26.597755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:26.625905 1498704 cri.go:89] found id: ""
	I1217 02:06:26.625933 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.625942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:26.625949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:26.626016 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:26.663442 1498704 cri.go:89] found id: ""
	I1217 02:06:26.663466 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.663475 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:26.663482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:26.663565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:26.692315 1498704 cri.go:89] found id: ""
	I1217 02:06:26.692342 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.692351 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:26.692362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:26.692422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:26.718259 1498704 cri.go:89] found id: ""
	I1217 02:06:26.718287 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.718296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:26.718303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:26.718361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:26.743360 1498704 cri.go:89] found id: ""
	I1217 02:06:26.743383 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.743391 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:26.743400 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:26.743412 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:26.770132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:26.770158 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:26.829657 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:26.829749 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:26.845511 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:26.845538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:26.912984 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:26.913004 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:26.913017 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:27.635261 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:30.135207 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:29.440066 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:29.450548 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:29.450621 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:29.474768 1498704 cri.go:89] found id: ""
	I1217 02:06:29.474800 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.474809 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:29.474816 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:29.474886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:29.498947 1498704 cri.go:89] found id: ""
	I1217 02:06:29.498969 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.498977 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:29.498983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:29.499041 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:29.523540 1498704 cri.go:89] found id: ""
	I1217 02:06:29.523564 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.523573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:29.523579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:29.523643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:29.556044 1498704 cri.go:89] found id: ""
	I1217 02:06:29.556069 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.556078 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:29.556084 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:29.556144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:29.581373 1498704 cri.go:89] found id: ""
	I1217 02:06:29.581399 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.581408 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:29.581414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:29.581485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:29.607453 1498704 cri.go:89] found id: ""
	I1217 02:06:29.607479 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.607489 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:29.607495 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:29.607576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:29.639841 1498704 cri.go:89] found id: ""
	I1217 02:06:29.639865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.639875 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:29.639881 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:29.639938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:29.670608 1498704 cri.go:89] found id: ""
	I1217 02:06:29.670635 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.670643 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:29.670653 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:29.670665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:29.728148 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:29.728181 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:29.743004 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:29.743029 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:29.815740 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:29.815762 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:29.815775 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.842206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:29.842243 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:32.370825 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:32.383399 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:32.383490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:32.416122 1498704 cri.go:89] found id: ""
	I1217 02:06:32.416148 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.416157 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:32.416164 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:32.416235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:32.450068 1498704 cri.go:89] found id: ""
	I1217 02:06:32.450092 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.450101 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:32.450107 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:32.450176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:32.475101 1498704 cri.go:89] found id: ""
	I1217 02:06:32.475126 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.475135 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:32.475142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:32.475218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:32.500347 1498704 cri.go:89] found id: ""
	I1217 02:06:32.500372 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.500380 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:32.500387 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:32.500447 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:32.525315 1498704 cri.go:89] found id: ""
	I1217 02:06:32.525346 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.525355 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:32.525361 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:32.525440 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:32.550267 1498704 cri.go:89] found id: ""
	I1217 02:06:32.550341 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.550358 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:32.550365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:32.550424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:32.575413 1498704 cri.go:89] found id: ""
	I1217 02:06:32.575438 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.575447 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:32.575453 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:32.575559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:32.603477 1498704 cri.go:89] found id: ""
	I1217 02:06:32.603503 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.603513 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:32.603523 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:32.603568 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:32.669699 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:32.669735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:32.686097 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:32.686126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:32.755583 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:32.755604 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:32.755616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:32.782146 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:32.782195 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:32.135482 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:34.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:33.698737 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:33.767478 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:33.767516 1498704 retry.go:31] will retry after 19.401613005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.214860 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:35.276710 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.276741 1498704 retry.go:31] will retry after 25.686831054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
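
[Editor's note] The `error validating ... failed to download openapi` messages above do not mean the manifests are malformed: kubectl's client-side validation fetches the OpenAPI schema from the apiserver, so while nothing is listening on localhost:8443 the validation step itself fails before anything is applied. A hedged sketch of issuing the same apply invocation from Go, with the paths copied from the log; the wrapper function is illustrative and not minikube's `ssh_runner`:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyManifest shells out to the pinned kubectl binary the way the
// log lines above do: KUBECONFIG is set via sudo's VAR=value syntax
// and the manifest is applied with --force. Run on the minikube node.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err) // fails with the openapi download error while the apiserver is down
	}
}
```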
	I1217 02:06:35.310030 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:35.320395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:35.320472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:35.344503 1498704 cri.go:89] found id: ""
	I1217 02:06:35.344525 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.344533 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:35.344539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:35.344597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:35.375750 1498704 cri.go:89] found id: ""
	I1217 02:06:35.375773 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.375782 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:35.375788 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:35.375857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:35.403776 1498704 cri.go:89] found id: ""
	I1217 02:06:35.403803 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.403813 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:35.403819 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:35.403878 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:35.437584 1498704 cri.go:89] found id: ""
	I1217 02:06:35.437608 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.437616 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:35.437623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:35.437723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:35.467173 1498704 cri.go:89] found id: ""
	I1217 02:06:35.467207 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.467216 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:35.467223 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:35.467289 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:35.491257 1498704 cri.go:89] found id: ""
	I1217 02:06:35.491284 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.491294 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:35.491301 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:35.491380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:35.515935 1498704 cri.go:89] found id: ""
	I1217 02:06:35.515961 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.515971 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:35.515978 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:35.516077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:35.542706 1498704 cri.go:89] found id: ""
	I1217 02:06:35.542730 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.542739 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:35.542748 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:35.542759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:35.601383 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:35.601428 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:35.616228 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:35.616269 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:35.693548 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:35.693569 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:35.693584 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:35.719247 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:35.719286 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:36.635304 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:39.135165 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:41.135205 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
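
[Editor's note] Interleaved with the functional-test output, process 1494358 (the no-preload StartStop test) is polling the `no-preload-178365` node's Ready condition against 192.168.76.2:8443 and hitting the same refused connection. A minimal client-go sketch of that kind of readiness check; the node name comes from the log, the kubeconfig path is the node-local one seen above and would need adapting if run elsewhere:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(cs, "no-preload-178365")
		if err != nil {
			fmt.Println("will retry:", err) // matches the node_ready.go:55 warnings above
			time.Sleep(2 * time.Second)     // the log retries on a similar cadence
			continue
		}
		fmt.Println("Ready:", ready)
		return
	}
}
```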
	I1217 02:06:38.250028 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:38.261967 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:38.262037 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:38.286400 1498704 cri.go:89] found id: ""
	I1217 02:06:38.286423 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.286431 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:38.286437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:38.286499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:38.310618 1498704 cri.go:89] found id: ""
	I1217 02:06:38.310639 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.310647 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:38.310654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:38.310713 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:38.335110 1498704 cri.go:89] found id: ""
	I1217 02:06:38.335136 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.335144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:38.335151 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:38.335214 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:38.364179 1498704 cri.go:89] found id: ""
	I1217 02:06:38.364202 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.364211 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:38.364218 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:38.364278 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:38.402338 1498704 cri.go:89] found id: ""
	I1217 02:06:38.402366 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.402374 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:38.402384 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:38.402443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:38.433053 1498704 cri.go:89] found id: ""
	I1217 02:06:38.433081 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.433090 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:38.433096 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:38.433155 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:38.461635 1498704 cri.go:89] found id: ""
	I1217 02:06:38.461688 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.461698 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:38.461704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:38.461767 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:38.486774 1498704 cri.go:89] found id: ""
	I1217 02:06:38.486798 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.486807 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:38.486816 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:38.486827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:38.543417 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:38.543453 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:38.558472 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:38.558499 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:38.627234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:38.627308 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:38.627336 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:38.656399 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:38.656481 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:41.188669 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:41.199463 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:41.199550 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:41.223737 1498704 cri.go:89] found id: ""
	I1217 02:06:41.223762 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.223771 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:41.223778 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:41.223842 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:41.248972 1498704 cri.go:89] found id: ""
	I1217 02:06:41.248998 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.249014 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:41.249022 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:41.249084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:41.274840 1498704 cri.go:89] found id: ""
	I1217 02:06:41.274873 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.274886 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:41.274892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:41.274965 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:41.302162 1498704 cri.go:89] found id: ""
	I1217 02:06:41.302188 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.302197 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:41.302204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:41.302274 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:41.331745 1498704 cri.go:89] found id: ""
	I1217 02:06:41.331771 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.331780 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:41.331786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:41.331872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:41.366507 1498704 cri.go:89] found id: ""
	I1217 02:06:41.366538 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.366559 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:41.366567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:41.366642 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:41.402343 1498704 cri.go:89] found id: ""
	I1217 02:06:41.402390 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.402400 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:41.402409 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:41.402482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:41.442142 1498704 cri.go:89] found id: ""
	I1217 02:06:41.442169 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.442177 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:41.442187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:41.442198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:41.498349 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:41.498432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:41.514261 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:41.514287 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:41.577450 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:41.577470 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:41.577483 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:41.602731 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:41.602766 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
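
[Editor's note] Each diagnostic cycle above (the cri.go:54 / cri.go:89 pairs) queries crictl for containers matching every expected control-plane name; `found id: ""` followed by `0 containers: []` means the ID list came back empty, i.e. the component was never started. A sketch of that lookup, using the exact crictl invocation from the log; the loop and helper around it are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose
// name matches the given filter, the same query the log issues with
// "sudo crictl ps -a --quiet --name=<name>". Must run on the node,
// with crictl on PATH.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		// An empty slice here corresponds to the `found id: ""` /
		// `0 containers: []` lines in the log above.
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}
```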
	W1217 02:06:43.635083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:45.635371 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:44.138863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:44.149308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:44.149424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:44.175006 1498704 cri.go:89] found id: ""
	I1217 02:06:44.175031 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.175040 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:44.175047 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:44.175103 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:44.199571 1498704 cri.go:89] found id: ""
	I1217 02:06:44.199596 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.199605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:44.199612 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:44.199669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:44.227289 1498704 cri.go:89] found id: ""
	I1217 02:06:44.227313 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.227323 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:44.227329 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:44.227418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:44.255509 1498704 cri.go:89] found id: ""
	I1217 02:06:44.255549 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.255558 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:44.255564 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:44.255622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:44.282827 1498704 cri.go:89] found id: ""
	I1217 02:06:44.282850 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.282858 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:44.282864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:44.282971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:44.310331 1498704 cri.go:89] found id: ""
	I1217 02:06:44.310354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.310363 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:44.310370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:44.310427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:44.334927 1498704 cri.go:89] found id: ""
	I1217 02:06:44.334952 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.334961 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:44.334968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:44.335068 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:44.359119 1498704 cri.go:89] found id: ""
	I1217 02:06:44.359144 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.359153 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:44.359162 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:44.359192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:44.436966 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:44.436987 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:44.437000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:44.462649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:44.462686 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.492091 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:44.492120 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:44.548670 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:44.548707 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
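
[Editor's note] When no control-plane containers are found, each cycle above falls back to collecting host-level logs (the logs.go:123 "Gathering logs for ..." lines): kubelet and containerd via journalctl, kernel messages via dmesg. A minimal sketch of gathering those same sources; the command strings are copied from the log, the wrapper is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs each log source through bash -c, exactly as the
// ssh_runner lines above do, keeping a bounded tail of each.
func gather() {
	sources := map[string]string{
		"kubelet":    `sudo journalctl -u kubelet -n 400`,
		"containerd": `sudo journalctl -u containerd -n 400`,
		"dmesg":      `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n%s", name, len(out), out)
	}
}

func main() { gather() }
```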
	I1217 02:06:47.063448 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:47.073962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:47.074076 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:47.100530 1498704 cri.go:89] found id: ""
	I1217 02:06:47.100565 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.100574 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:47.100580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:47.100656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:47.126541 1498704 cri.go:89] found id: ""
	I1217 02:06:47.126573 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.126582 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:47.126589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:47.126657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:47.155783 1498704 cri.go:89] found id: ""
	I1217 02:06:47.155807 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.155816 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:47.155822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:47.155887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:47.183519 1498704 cri.go:89] found id: ""
	I1217 02:06:47.183547 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.183556 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:47.183562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:47.183640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:47.207004 1498704 cri.go:89] found id: ""
	I1217 02:06:47.207029 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.207038 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:47.207044 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:47.207107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:47.236132 1498704 cri.go:89] found id: ""
	I1217 02:06:47.236157 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.236166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:47.236173 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:47.236237 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:47.262428 1498704 cri.go:89] found id: ""
	I1217 02:06:47.262452 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.262460 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:47.262470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:47.262526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:47.291039 1498704 cri.go:89] found id: ""
	I1217 02:06:47.291113 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.291127 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:47.291137 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:47.291154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:47.348423 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:47.348457 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.362973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:47.363001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:47.446529 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:47.446602 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:47.446619 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:47.471848 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:47.471885 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:48.135178 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:50.635159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:50.002430 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:50.016670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:50.016759 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:50.048092 1498704 cri.go:89] found id: ""
	I1217 02:06:50.048116 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.048126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:50.048132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:50.048193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:50.077981 1498704 cri.go:89] found id: ""
	I1217 02:06:50.078006 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.078016 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:50.078023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:50.078084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:50.104799 1498704 cri.go:89] found id: ""
	I1217 02:06:50.104824 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.104833 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:50.104839 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:50.104899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:50.134987 1498704 cri.go:89] found id: ""
	I1217 02:06:50.135010 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.135019 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:50.135025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:50.135088 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:50.163663 1498704 cri.go:89] found id: ""
	I1217 02:06:50.163689 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.163698 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:50.163704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:50.163771 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:50.189331 1498704 cri.go:89] found id: ""
	I1217 02:06:50.189354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.189362 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:50.189369 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:50.189435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:50.214491 1498704 cri.go:89] found id: ""
	I1217 02:06:50.214516 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.214525 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:50.214531 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:50.214590 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:50.238415 1498704 cri.go:89] found id: ""
	I1217 02:06:50.238442 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.238451 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:50.238460 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:50.238472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.269776 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:50.269804 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:50.327018 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:50.327055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:50.341848 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:50.341876 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:50.424429 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:50.424452 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:50.424466 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:52.635229 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:54.635273 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:52.954006 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:52.964727 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:52.964802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:52.989789 1498704 cri.go:89] found id: ""
	I1217 02:06:52.989810 1498704 logs.go:282] 0 containers: []
	W1217 02:06:52.989819 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:52.989826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:52.989887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:53.015439 1498704 cri.go:89] found id: ""
	I1217 02:06:53.015467 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.015476 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:53.015482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:53.015592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:53.040841 1498704 cri.go:89] found id: ""
	I1217 02:06:53.040865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.040875 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:53.040882 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:53.040942 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:53.066349 1498704 cri.go:89] found id: ""
	I1217 02:06:53.066374 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.066383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:53.066389 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:53.066451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:53.091390 1498704 cri.go:89] found id: ""
	I1217 02:06:53.091415 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.091424 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:53.091430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:53.091490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:53.117556 1498704 cri.go:89] found id: ""
	I1217 02:06:53.117581 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.117590 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:53.117597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:53.117683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:53.142385 1498704 cri.go:89] found id: ""
	I1217 02:06:53.142411 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.142421 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:53.142428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:53.142487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:53.167326 1498704 cri.go:89] found id: ""
	I1217 02:06:53.167351 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.167360 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:53.167370 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:53.167410 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:53.169580 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:06:53.227048 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:53.227133 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:06:53.263335 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:53.263474 1498704 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
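Every validation error in the dashboard apply above has the same root cause: kubectl cannot fetch the OpenAPI schema because nothing is listening on localhost:8443. A quick manual check, plus the workaround the error message itself suggests (this only skips client-side validation; the apply still needs a live apiserver to succeed):

    # Is anything answering on the apiserver port?
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"

    # Skip client-side schema validation, as the error suggests; the
    # request itself will still be refused while the apiserver is down.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml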
	I1217 02:06:53.263485 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:53.263548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:53.331925 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:53.331956 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:53.331970 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:53.358423 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:53.358461 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:55.889770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
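Between log-gathering passes the runner also checks whether an apiserver process exists at all. The pgrep flags: -f matches against the full command line, -x requires the regex to match that whole line, and -n reports only the newest match:

    # Exit status 0 only if a running kube-apiserver's full command line
    # matches the pattern (newest such process only).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found"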
	I1217 02:06:55.902670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:55.902755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:55.931695 1498704 cri.go:89] found id: ""
	I1217 02:06:55.931717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.931726 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:55.931732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:55.931792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:55.957876 1498704 cri.go:89] found id: ""
	I1217 02:06:55.957898 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.957906 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:55.957913 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:55.957971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:55.985470 1498704 cri.go:89] found id: ""
	I1217 02:06:55.985494 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.985503 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:55.985510 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:55.985569 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:56.012853 1498704 cri.go:89] found id: ""
	I1217 02:06:56.012876 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.012885 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:56.012892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:56.012953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:56.038869 1498704 cri.go:89] found id: ""
	I1217 02:06:56.038896 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.038906 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:56.038912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:56.038974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:56.063896 1498704 cri.go:89] found id: ""
	I1217 02:06:56.063922 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.063931 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:56.063938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:56.063998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:56.094167 1498704 cri.go:89] found id: ""
	I1217 02:06:56.094194 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.094202 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:56.094209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:56.094317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:56.119180 1498704 cri.go:89] found id: ""
	I1217 02:06:56.119203 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.119211 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:56.119220 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:56.119233 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:56.145717 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:56.145755 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:56.174733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:56.174764 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:56.231996 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:56.232031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:56.246270 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:56.246298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:56.310523 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:58.810773 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:58.820984 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:58.821052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:58.844690 1498704 cri.go:89] found id: ""
	I1217 02:06:58.844713 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.844723 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:58.844729 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:58.844789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:58.869040 1498704 cri.go:89] found id: ""
	I1217 02:06:58.869065 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.869074 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:58.869081 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:58.869141 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:58.897937 1498704 cri.go:89] found id: ""
	I1217 02:06:58.897965 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.897974 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:58.897981 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:58.898046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:58.936181 1498704 cri.go:89] found id: ""
	I1217 02:06:58.936206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.936216 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:58.936222 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:58.936284 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:58.961870 1498704 cri.go:89] found id: ""
	I1217 02:06:58.961894 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.961902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:58.961908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:58.961973 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:58.987453 1498704 cri.go:89] found id: ""
	I1217 02:06:58.987476 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.987485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:58.987492 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:58.987589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:59.014256 1498704 cri.go:89] found id: ""
	I1217 02:06:59.014281 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.014290 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:59.014296 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:59.014356 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:59.043181 1498704 cri.go:89] found id: ""
	I1217 02:06:59.043206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.043214 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:59.043224 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:59.043265 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:59.069988 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:59.070014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:59.126583 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:59.126616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:59.143769 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:59.143858 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:59.206336 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:59.206357 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:59.206368 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:59.467894 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:59.526704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.526801 1498704 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:00.964501 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:01.024877 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:01.024990 1498704 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
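Both addon applies are marked "apply failed, will retry": minikube reapplies each manifest until it succeeds or the overall start deadline expires. A rough sketch of that behavior as a shell wrapper (apply_with_retry, the attempt budget, and the pause are hypothetical; the real backoff lives in minikube's addons.go and may differ):

    # Hypothetical retry wrapper: reapply a manifest with a fixed pause
    # until kubectl succeeds or the attempt budget is exhausted.
    apply_with_retry() {
      local manifest=$1 attempts=${2:-10} pause=${3:-5}
      for i in $(seq 1 "$attempts"); do
        if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
             /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f "$manifest"; then
          return 0
        fi
        echo "apply failed ($i/$attempts), retrying in ${pause}s" >&2
        sleep "$pause"
      done
      return 1
    }
    apply_with_retry /etc/kubernetes/addons/storage-provisioner.yaml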
	I1217 02:07:01.030055 1498704 out.go:179] * Enabled addons: 
	W1217 02:06:57.134604 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:59.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:01.032983 1498704 addons.go:530] duration metric: took 1m40.577449503s for enable addons: enabled=[]
	I1217 02:07:01.732628 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:01.743041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:01.743116 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:01.767462 1498704 cri.go:89] found id: ""
	I1217 02:07:01.767488 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.767497 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:01.767503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:01.767602 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:01.793082 1498704 cri.go:89] found id: ""
	I1217 02:07:01.793104 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.793112 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:01.793119 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:01.793179 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:01.819716 1498704 cri.go:89] found id: ""
	I1217 02:07:01.819740 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.819749 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:01.819755 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:01.819815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:01.847485 1498704 cri.go:89] found id: ""
	I1217 02:07:01.847556 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.847572 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:01.847580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:01.847641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:01.875985 1498704 cri.go:89] found id: ""
	I1217 02:07:01.876062 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.876084 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:01.876103 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:01.876193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:01.910714 1498704 cri.go:89] found id: ""
	I1217 02:07:01.910739 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.910748 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:01.910754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:01.910813 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:01.937846 1498704 cri.go:89] found id: ""
	I1217 02:07:01.937871 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.937880 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:01.937886 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:01.937945 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:01.964067 1498704 cri.go:89] found id: ""
	I1217 02:07:01.964091 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.964100 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:01.964114 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:01.964126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:02.028700 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:02.028724 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:02.028739 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:02.054141 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:02.054180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:02.082544 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:02.082570 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:02.139516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:02.139555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:07:01.635378 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:04.134753 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:06.135163 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
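The interleaved 1494358 lines belong to the parallel no-preload test, which is polling the node's Ready condition against 192.168.76.2:8443 and hitting the same refused connection. The equivalent manual probe, using standard kubectl jsonpath filtering:

    # Prints "True" once the node reports Ready; fails while the
    # apiserver at 192.168.76.2:8443 is refusing connections.
    kubectl get node no-preload-178365 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'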
	I1217 02:07:04.654404 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:04.665750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:04.665823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:04.692548 1498704 cri.go:89] found id: ""
	I1217 02:07:04.692573 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.692582 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:04.692589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:04.692649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:04.716945 1498704 cri.go:89] found id: ""
	I1217 02:07:04.716971 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.716980 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:04.716986 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:04.717050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:04.741853 1498704 cri.go:89] found id: ""
	I1217 02:07:04.741919 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.741943 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:04.741956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:04.742029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:04.766368 1498704 cri.go:89] found id: ""
	I1217 02:07:04.766432 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.766456 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:04.766471 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:04.766543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:04.791787 1498704 cri.go:89] found id: ""
	I1217 02:07:04.791811 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.791819 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:04.791826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:04.791886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:04.817229 1498704 cri.go:89] found id: ""
	I1217 02:07:04.817255 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.817264 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:04.817271 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:04.817343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:04.841915 1498704 cri.go:89] found id: ""
	I1217 02:07:04.841938 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.841947 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:04.841953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:04.842013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:04.866862 1498704 cri.go:89] found id: ""
	I1217 02:07:04.866889 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.866898 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:04.866908 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:04.866920 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:04.930507 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:04.930554 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.948025 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:04.948060 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:05.019651 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:05.019675 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:05.019688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:05.046001 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:05.046036 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.578495 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:07.591153 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:07.591225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:07.621427 1498704 cri.go:89] found id: ""
	I1217 02:07:07.621450 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.621459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:07.621466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:07.621526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:07.661892 1498704 cri.go:89] found id: ""
	I1217 02:07:07.661915 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.661923 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:07.661929 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:07.661995 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:07.695665 1498704 cri.go:89] found id: ""
	I1217 02:07:07.695693 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.695703 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:07.695709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:07.695775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:07.721278 1498704 cri.go:89] found id: ""
	I1217 02:07:07.721308 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.721316 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:07.721323 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:07.721381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:07.745368 1498704 cri.go:89] found id: ""
	I1217 02:07:07.745396 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.745404 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:07.745411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:07.745469 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:07.773994 1498704 cri.go:89] found id: ""
	I1217 02:07:07.774017 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.774025 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:07.774032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:07.774094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:07.799025 1498704 cri.go:89] found id: ""
	I1217 02:07:07.799049 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.799058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:07.799070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:07.799128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:07.824235 1498704 cri.go:89] found id: ""
	I1217 02:07:07.824261 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.824270 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:07.824278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:07.824290 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:07.839101 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:07.839129 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:08.135245 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:10.635146 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:07.923334 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:07.923360 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:07.923372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:07.949715 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:07.949754 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.977665 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:07.977690 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.537062 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:10.547797 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:10.547872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:10.572434 1498704 cri.go:89] found id: ""
	I1217 02:07:10.572462 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.572472 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:10.572479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:10.572560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:10.597486 1498704 cri.go:89] found id: ""
	I1217 02:07:10.597510 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.597519 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:10.597525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:10.597591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:10.627205 1498704 cri.go:89] found id: ""
	I1217 02:07:10.627227 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.627236 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:10.627241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:10.627316 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:10.661788 1498704 cri.go:89] found id: ""
	I1217 02:07:10.661815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.661825 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:10.661832 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:10.661892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:10.694378 1498704 cri.go:89] found id: ""
	I1217 02:07:10.694403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.694411 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:10.694417 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:10.694481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:10.719732 1498704 cri.go:89] found id: ""
	I1217 02:07:10.719759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.719768 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:10.719775 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:10.719834 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:10.746071 1498704 cri.go:89] found id: ""
	I1217 02:07:10.746141 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.746169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:10.746181 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:10.746257 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:10.771251 1498704 cri.go:89] found id: ""
	I1217 02:07:10.771324 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.771339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:10.771349 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:10.771363 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:10.797277 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:10.797316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:10.824227 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:10.824255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.883648 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:10.883685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:10.899500 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:10.899545 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:10.971848 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:07:13.135257 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:15.635347 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:13.472155 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:13.482654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:13.482730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:13.511840 1498704 cri.go:89] found id: ""
	I1217 02:07:13.511865 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.511874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:13.511880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:13.511938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:13.539314 1498704 cri.go:89] found id: ""
	I1217 02:07:13.539340 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.539349 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:13.539355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:13.539418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:13.564523 1498704 cri.go:89] found id: ""
	I1217 02:07:13.564595 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.564616 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:13.564635 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:13.564722 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:13.588672 1498704 cri.go:89] found id: ""
	I1217 02:07:13.588696 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.588705 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:13.588711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:13.588769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:13.613292 1498704 cri.go:89] found id: ""
	I1217 02:07:13.613370 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.613394 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:13.613413 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:13.613497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:13.640379 1498704 cri.go:89] found id: ""
	I1217 02:07:13.640401 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.640467 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:13.640475 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:13.640596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:13.670823 1498704 cri.go:89] found id: ""
	I1217 02:07:13.670897 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.670909 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:13.670915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:13.671033 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:13.697928 1498704 cri.go:89] found id: ""
	I1217 02:07:13.697954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.697963 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:13.697973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:13.697991 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:13.764081 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:13.764103 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:13.764117 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:13.789698 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:13.789735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:13.817458 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:13.817528 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:13.873570 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:13.873604 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.390490 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:16.400824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:16.400892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:16.433284 1498704 cri.go:89] found id: ""
	I1217 02:07:16.433306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.433315 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:16.433321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:16.433382 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:16.459029 1498704 cri.go:89] found id: ""
	I1217 02:07:16.459051 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.459059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:16.459065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:16.459123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:16.482532 1498704 cri.go:89] found id: ""
	I1217 02:07:16.482559 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.482568 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:16.482574 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:16.482635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:16.508099 1498704 cri.go:89] found id: ""
	I1217 02:07:16.508126 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.508135 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:16.508141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:16.508198 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:16.537293 1498704 cri.go:89] found id: ""
	I1217 02:07:16.537327 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.537336 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:16.537343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:16.537422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:16.561736 1498704 cri.go:89] found id: ""
	I1217 02:07:16.561761 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.561769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:16.561776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:16.561841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:16.588020 1498704 cri.go:89] found id: ""
	I1217 02:07:16.588054 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.588063 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:16.588069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:16.588136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:16.614951 1498704 cri.go:89] found id: ""
	I1217 02:07:16.614983 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.614993 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:16.615018 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:16.615035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:16.674706 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:16.674738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.693871 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:16.694008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:16.761779 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:16.761800 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:16.761813 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:16.788228 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:16.788270 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:18.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:20.135199 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:19.320399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:19.330773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:19.330845 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:19.354921 1498704 cri.go:89] found id: ""
	I1217 02:07:19.354990 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.355015 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:19.355028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:19.355100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:19.383572 1498704 cri.go:89] found id: ""
	I1217 02:07:19.383648 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.383662 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:19.383670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:19.383735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:19.412179 1498704 cri.go:89] found id: ""
	I1217 02:07:19.412204 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.412213 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:19.412229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:19.412290 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:19.437924 1498704 cri.go:89] found id: ""
	I1217 02:07:19.437950 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.437959 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:19.437966 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:19.438057 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:19.462416 1498704 cri.go:89] found id: ""
	I1217 02:07:19.462483 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.462507 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:19.462528 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:19.462618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:19.486955 1498704 cri.go:89] found id: ""
	I1217 02:07:19.487022 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.487047 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:19.487061 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:19.487133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:19.517143 1498704 cri.go:89] found id: ""
	I1217 02:07:19.517170 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.517178 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:19.517185 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:19.517245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:19.541419 1498704 cri.go:89] found id: ""
	I1217 02:07:19.541443 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.541452 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:19.541462 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:19.541474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:19.600586 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:19.600621 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:19.615645 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:19.615673 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:19.700496 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:19.700518 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:19.700531 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:19.725860 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:19.725896 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:22.254753 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:22.266831 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:22.266902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:22.291227 1498704 cri.go:89] found id: ""
	I1217 02:07:22.291306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.291329 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:22.291344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:22.291421 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:22.317812 1498704 cri.go:89] found id: ""
	I1217 02:07:22.317835 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.317844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:22.317850 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:22.317929 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:22.341950 1498704 cri.go:89] found id: ""
	I1217 02:07:22.341973 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.341982 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:22.341991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:22.342074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:22.368217 1498704 cri.go:89] found id: ""
	I1217 02:07:22.368291 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.368330 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:22.368350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:22.368435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:22.396888 1498704 cri.go:89] found id: ""
	I1217 02:07:22.396911 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.396920 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:22.396926 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:22.396987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:22.420964 1498704 cri.go:89] found id: ""
	I1217 02:07:22.421040 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.421064 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:22.421083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:22.421163 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:22.446890 1498704 cri.go:89] found id: ""
	I1217 02:07:22.446954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.446980 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:22.447002 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:22.447067 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:22.475922 1498704 cri.go:89] found id: ""
	I1217 02:07:22.475949 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.475959 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:22.475968 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:22.475980 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:22.532457 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:22.532490 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:22.546823 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:22.546900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:22.612059 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:22.612089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:22.612102 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:22.642268 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:22.642325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:22.635112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:25.134718 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:25.182933 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:25.194033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:25.194115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:25.218403 1498704 cri.go:89] found id: ""
	I1217 02:07:25.218426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.218434 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:25.218441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:25.218500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:25.247233 1498704 cri.go:89] found id: ""
	I1217 02:07:25.247257 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.247267 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:25.247272 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:25.247337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:25.271255 1498704 cri.go:89] found id: ""
	I1217 02:07:25.271278 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.271286 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:25.271292 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:25.271354 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:25.295129 1498704 cri.go:89] found id: ""
	I1217 02:07:25.295152 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.295161 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:25.295167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:25.295232 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:25.323735 1498704 cri.go:89] found id: ""
	I1217 02:07:25.323802 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.323818 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:25.323826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:25.323895 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:25.348083 1498704 cri.go:89] found id: ""
	I1217 02:07:25.348107 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.348116 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:25.348123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:25.348187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:25.375945 1498704 cri.go:89] found id: ""
	I1217 02:07:25.375967 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.375976 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:25.375982 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:25.376046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:25.404167 1498704 cri.go:89] found id: ""
	I1217 02:07:25.404190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.404199 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:25.404207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:25.404219 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.432830 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:25.432905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:25.491437 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:25.491472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:25.506773 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:25.506811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:25.571857 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:25.563411    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.564290    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566145    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566486    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.567944    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:25.571879 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:25.571891 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:07:27.634506 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:29.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:28.097148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:28.109420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:28.109492 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:28.147274 1498704 cri.go:89] found id: ""
	I1217 02:07:28.147301 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.147310 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:28.147317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:28.147375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:28.182487 1498704 cri.go:89] found id: ""
	I1217 02:07:28.182520 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.182529 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:28.182535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:28.182605 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:28.210414 1498704 cri.go:89] found id: ""
	I1217 02:07:28.210492 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.210506 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:28.210513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:28.210596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:28.236032 1498704 cri.go:89] found id: ""
	I1217 02:07:28.236067 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.236076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:28.236100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:28.236187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:28.261848 1498704 cri.go:89] found id: ""
	I1217 02:07:28.261925 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.261949 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:28.261961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:28.262023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:28.287575 1498704 cri.go:89] found id: ""
	I1217 02:07:28.287642 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.287667 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:28.287681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:28.287753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:28.311909 1498704 cri.go:89] found id: ""
	I1217 02:07:28.311942 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.311950 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:28.311974 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:28.312055 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:28.338978 1498704 cri.go:89] found id: ""
	I1217 02:07:28.338999 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.339013 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:28.339041 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:28.339059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:28.395245 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:28.395283 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:28.410155 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:28.410183 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:28.473762 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:28.465176    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.465695    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467313    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467841    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.469624    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:28.473783 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:28.473807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:28.499695 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:28.499728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.034443 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:31.045062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:31.045138 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:31.071798 1498704 cri.go:89] found id: ""
	I1217 02:07:31.071825 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.071835 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:31.071842 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:31.071912 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:31.102760 1498704 cri.go:89] found id: ""
	I1217 02:07:31.102787 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.102795 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:31.102802 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:31.102866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:31.141278 1498704 cri.go:89] found id: ""
	I1217 02:07:31.141303 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.141313 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:31.141320 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:31.141385 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:31.171560 1498704 cri.go:89] found id: ""
	I1217 02:07:31.171590 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.171599 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:31.171606 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:31.171671 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:31.198647 1498704 cri.go:89] found id: ""
	I1217 02:07:31.198713 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.198736 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:31.198749 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:31.198822 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:31.223451 1498704 cri.go:89] found id: ""
	I1217 02:07:31.223534 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.223560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:31.223580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:31.223660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:31.253387 1498704 cri.go:89] found id: ""
	I1217 02:07:31.253413 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.253422 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:31.253428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:31.253487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:31.278792 1498704 cri.go:89] found id: ""
	I1217 02:07:31.278815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.278823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:31.278832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:31.278843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:31.303758 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:31.303790 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.332180 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:31.332251 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:31.388186 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:31.388222 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:31.402632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:31.402661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:31.464007 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:31.455376    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456162    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456959    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458412    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458952    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
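The pattern above repeats for the remainder of this run: the probe finds no kube-apiserver process, every crictl listing of the control-plane containers comes back empty, and kubectl describe nodes fails because nothing is listening on localhost:8443. A minimal way to rerun the same probe by hand, assuming shell access to the affected node (the commands, binary path, and kubeconfig location are exactly those executed in the log; only the quoting is added):

	# Does an apiserver process or container exist? (same checks as in the log)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver

	# Why didn't the kubelet start the control-plane static pods?
	sudo journalctl -u kubelet -n 400

	# The failing describe call; it requires the apiserver on localhost:8443
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig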
	W1217 02:07:32.134594 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:34.135393 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:33.964236 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:33.974724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:33.974801 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:33.997812 1498704 cri.go:89] found id: ""
	I1217 02:07:33.997833 1498704 logs.go:282] 0 containers: []
	W1217 02:07:33.997841 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:33.997847 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:33.997918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:34.028229 1498704 cri.go:89] found id: ""
	I1217 02:07:34.028256 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.028265 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:34.028273 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:34.028333 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:34.053400 1498704 cri.go:89] found id: ""
	I1217 02:07:34.053426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.053437 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:34.053444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:34.053504 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:34.079351 1498704 cri.go:89] found id: ""
	I1217 02:07:34.079419 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.079433 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:34.079441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:34.079499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:34.106192 1498704 cri.go:89] found id: ""
	I1217 02:07:34.106228 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.106237 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:34.106244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:34.106315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:34.147697 1498704 cri.go:89] found id: ""
	I1217 02:07:34.147759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.147785 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:34.147810 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:34.147890 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:34.176177 1498704 cri.go:89] found id: ""
	I1217 02:07:34.176244 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.176268 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:34.176288 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:34.176365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:34.205945 1498704 cri.go:89] found id: ""
	I1217 02:07:34.206007 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.206035 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:34.206056 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:34.206081 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:34.262276 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:34.262309 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:34.276944 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:34.276971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:34.338908 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:34.331218    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.331638    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333081    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333377    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.334783    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:34.338934 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:34.338947 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:34.363617 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:34.363647 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:36.891296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:36.902860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:36.902927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:36.930707 1498704 cri.go:89] found id: ""
	I1217 02:07:36.930733 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.930747 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:36.930754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:36.930811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:36.955573 1498704 cri.go:89] found id: ""
	I1217 02:07:36.955597 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.955605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:36.955611 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:36.955668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:36.980409 1498704 cri.go:89] found id: ""
	I1217 02:07:36.980434 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.980444 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:36.980450 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:36.980508 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:37.009442 1498704 cri.go:89] found id: ""
	I1217 02:07:37.009467 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.009477 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:37.009484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:37.009551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:37.037149 1498704 cri.go:89] found id: ""
	I1217 02:07:37.037171 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.037180 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:37.037186 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:37.037250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:37.061767 1498704 cri.go:89] found id: ""
	I1217 02:07:37.061792 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.061801 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:37.061818 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:37.061889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:37.085968 1498704 cri.go:89] found id: ""
	I1217 02:07:37.085993 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.086003 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:37.086009 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:37.086074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:37.115273 1498704 cri.go:89] found id: ""
	I1217 02:07:37.115295 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.115303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:37.115312 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:37.115323 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:37.173190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:37.173223 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:37.190802 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:37.190834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:37.258464 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:37.250353    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.250978    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.252515    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.253019    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.254562    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:37.258486 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:37.258498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:37.283631 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:37.283665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:36.635067 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:38.635141 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:40.635215 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:39.816914 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:39.827386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:39.827463 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:39.852104 1498704 cri.go:89] found id: ""
	I1217 02:07:39.852129 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.852139 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:39.852145 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:39.852204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:39.892785 1498704 cri.go:89] found id: ""
	I1217 02:07:39.892806 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.892815 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:39.892822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:39.892887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:39.923500 1498704 cri.go:89] found id: ""
	I1217 02:07:39.923530 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.923538 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:39.923544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:39.923603 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:39.949968 1498704 cri.go:89] found id: ""
	I1217 02:07:39.949995 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.950004 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:39.950010 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:39.950071 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:39.974479 1498704 cri.go:89] found id: ""
	I1217 02:07:39.974500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.974508 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:39.974515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:39.974572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:40.015259 1498704 cri.go:89] found id: ""
	I1217 02:07:40.015286 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.015296 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:40.015303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:40.015375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:40.045029 1498704 cri.go:89] found id: ""
	I1217 02:07:40.045055 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.045064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:40.045071 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:40.045135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:40.072784 1498704 cri.go:89] found id: ""
	I1217 02:07:40.072818 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.072833 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:40.072843 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:40.072860 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:40.153737 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:40.142795    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.144161    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.145378    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.146432    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.147502    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:40.153765 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:40.153780 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:40.189498 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:40.189552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:40.222768 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:40.222844 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:40.279190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:40.279224 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:42.796231 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:42.806670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:42.806738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:42.830230 1498704 cri.go:89] found id: ""
	I1217 02:07:42.830250 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.830258 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:42.830265 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:42.830323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1217 02:07:43.135159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:45.135226 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:42.855478 1498704 cri.go:89] found id: ""
	I1217 02:07:42.855500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.855509 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:42.855515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:42.855580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:42.894494 1498704 cri.go:89] found id: ""
	I1217 02:07:42.894522 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.894530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:42.894536 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:42.894593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:42.921324 1498704 cri.go:89] found id: ""
	I1217 02:07:42.921350 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.921359 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:42.921365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:42.921435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:42.953266 1498704 cri.go:89] found id: ""
	I1217 02:07:42.953290 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.953299 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:42.953305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:42.953366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:42.977816 1498704 cri.go:89] found id: ""
	I1217 02:07:42.977841 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.977850 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:42.977856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:42.977917 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:43.003747 1498704 cri.go:89] found id: ""
	I1217 02:07:43.003839 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.003865 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:43.003880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:43.003963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:43.029772 1498704 cri.go:89] found id: ""
	I1217 02:07:43.029797 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.029806 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:43.029816 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:43.029828 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:43.055443 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:43.055476 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:43.084076 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:43.084104 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:43.145546 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:43.145607 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:43.161920 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:43.161999 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:43.231831 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:43.222961    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.223493    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225230    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225634    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.227364    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:45.733506 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:45.744340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:45.744408 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:45.769934 1498704 cri.go:89] found id: ""
	I1217 02:07:45.769957 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.769965 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:45.769971 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:45.770034 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:45.795238 1498704 cri.go:89] found id: ""
	I1217 02:07:45.795263 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.795272 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:45.795279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:45.795343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:45.821898 1498704 cri.go:89] found id: ""
	I1217 02:07:45.821922 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.821930 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:45.821937 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:45.821999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:45.847109 1498704 cri.go:89] found id: ""
	I1217 02:07:45.847132 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.847140 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:45.847146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:45.847208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:45.880160 1498704 cri.go:89] found id: ""
	I1217 02:07:45.880190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.880199 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:45.880205 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:45.880271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:45.910818 1498704 cri.go:89] found id: ""
	I1217 02:07:45.910850 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.910859 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:45.910866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:45.910927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:45.939378 1498704 cri.go:89] found id: ""
	I1217 02:07:45.939403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.939413 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:45.939419 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:45.939480 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:45.966395 1498704 cri.go:89] found id: ""
	I1217 02:07:45.966421 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.966430 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:45.966440 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:45.966479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:45.981177 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:45.981203 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:46.055154 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:46.045816    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.046563    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.048453    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.049038    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.050565    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:46.055186 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:46.055204 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:46.081781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:46.081822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:46.110247 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:46.110271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:07:47.635175 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:50.134634 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:48.673749 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:48.684117 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:48.684190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:48.710141 1498704 cri.go:89] found id: ""
	I1217 02:07:48.710163 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.710171 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:48.710177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:48.710242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:48.735609 1498704 cri.go:89] found id: ""
	I1217 02:07:48.735631 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.735639 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:48.735648 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:48.735707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:48.760494 1498704 cri.go:89] found id: ""
	I1217 02:07:48.760517 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.760525 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:48.760532 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:48.760592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:48.786553 1498704 cri.go:89] found id: ""
	I1217 02:07:48.786574 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.786582 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:48.786588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:48.786645 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:48.815529 1498704 cri.go:89] found id: ""
	I1217 02:07:48.815551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.815560 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:48.815566 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:48.815623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:48.839528 1498704 cri.go:89] found id: ""
	I1217 02:07:48.839551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.839560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:48.839567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:48.839649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:48.870240 1498704 cri.go:89] found id: ""
	I1217 02:07:48.870266 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.870275 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:48.870282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:48.870363 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:48.906712 1498704 cri.go:89] found id: ""
	I1217 02:07:48.906736 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.906746 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:48.906756 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:48.906786 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:48.934786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:48.934865 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:48.964758 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:48.964785 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:49.022291 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:49.022326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:49.036990 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:49.037025 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:49.101921 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:49.093270    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.093786    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095214    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095625    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.097015    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.602715 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.614088 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:51.614167 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:51.640614 1498704 cri.go:89] found id: ""
	I1217 02:07:51.640639 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.640648 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:51.640655 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:51.640716 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:51.665595 1498704 cri.go:89] found id: ""
	I1217 02:07:51.665622 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.665631 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:51.665637 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:51.665727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:51.690508 1498704 cri.go:89] found id: ""
	I1217 02:07:51.690532 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.690541 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:51.690547 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:51.690627 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:51.717537 1498704 cri.go:89] found id: ""
	I1217 02:07:51.717561 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.717570 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:51.717577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:51.717638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:51.742073 1498704 cri.go:89] found id: ""
	I1217 02:07:51.742095 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.742104 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:51.742110 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:51.742169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:51.768165 1498704 cri.go:89] found id: ""
	I1217 02:07:51.768188 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.768234 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:51.768255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:51.768322 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:51.793095 1498704 cri.go:89] found id: ""
	I1217 02:07:51.793118 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.793127 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:51.793133 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:51.793195 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:51.817679 1498704 cri.go:89] found id: ""
	I1217 02:07:51.817701 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.817710 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:51.817720 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:51.817730 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:51.874453 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:51.874486 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:51.890393 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:51.890418 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:51.966182 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
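The "failed describe nodes" block above recurs on every iteration of this run: the log gatherer shells out to the bundled kubectl with the in-VM kubeconfig, and each attempt exits 1 with "connection refused" on localhost:8443 because no kube-apiserver is listening yet. Below is a minimal Go sketch of the same probe; the binary and kubeconfig paths are copied from the log, everything else is illustrative and not minikube's own code.

    // describe_nodes_probe.go: re-run the probe seen in the log, keeping
    // stdout and stderr separate the way the report does.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl" // path from the log
        cmd := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With no apiserver on localhost:8443 this exits with status 1
            // and "connection refused" in stderr, matching the block above.
            fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }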
	I1217 02:07:51.966201 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:51.966214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:51.992382 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:51.992417 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:52.135139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:54.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
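The W-level node_ready.go lines interleaved here come from a parallel test (process 1494358) driving the node no-preload-178365; it polls the node's Ready condition and retries while connections to the apiserver are refused. The two processes flush output in chunks, which is why timestamps in this section are not strictly monotonic. A sketch of that kind of Ready poll with client-go follows; the kubeconfig path and the retry interval are assumptions, and this is not minikube's implementation.

    // node_ready_poll.go: poll a node's Ready condition, retrying on errors
    // such as "connect: connection refused" while the apiserver is down.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the real tests use per-profile files.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(),
                "no-preload-178365", metav1.GetOptions{})
            if err != nil {
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2 * time.Second) // assumed interval
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }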
	I1217 02:07:54.525060 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:54.535685 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:54.535760 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:54.563912 1498704 cri.go:89] found id: ""
	I1217 02:07:54.563935 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.563944 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:54.563950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:54.564011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:54.588995 1498704 cri.go:89] found id: ""
	I1217 02:07:54.589020 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.589031 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:54.589038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:54.589101 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:54.615173 1498704 cri.go:89] found id: ""
	I1217 02:07:54.615198 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.615207 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:54.615214 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:54.615277 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:54.640498 1498704 cri.go:89] found id: ""
	I1217 02:07:54.640523 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.640532 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:54.640539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:54.640623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:54.666201 1498704 cri.go:89] found id: ""
	I1217 02:07:54.666226 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.666234 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:54.666241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:54.666303 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:54.690876 1498704 cri.go:89] found id: ""
	I1217 02:07:54.690899 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.690908 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:54.690915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:54.690974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:54.714932 1498704 cri.go:89] found id: ""
	I1217 02:07:54.715000 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.715024 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:54.715043 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:54.715133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:54.739880 1498704 cri.go:89] found id: ""
	I1217 02:07:54.739906 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.739926 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
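Each iteration above enumerates the expected control-plane and addon containers one name at a time with "crictl ps -a --quiet --name=<name>"; an empty result is what produces the paired 'found id: ""' and "0 containers" lines for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard. A compact Go sketch of the same enumeration, using only the flags shown in the log:

    // crictl_enum.go: count containers per expected component name.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out)) // one container ID per line
            fmt.Printf("%-24s %d container(s)\n", name, len(ids))
        }
    }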
	I1217 02:07:54.739952 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:54.739978 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:54.804035 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:54.804056 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:54.804070 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:54.829994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:54.830030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.858611 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:54.858639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:54.921120 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:54.921196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.438546 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.448669 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:57.448736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:57.475324 1498704 cri.go:89] found id: ""
	I1217 02:07:57.475346 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.475355 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:57.475362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:57.475419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:57.505098 1498704 cri.go:89] found id: ""
	I1217 02:07:57.505123 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.505131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:57.505137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:57.505196 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:57.529496 1498704 cri.go:89] found id: ""
	I1217 02:07:57.529519 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.529529 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:57.529535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:57.529601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:57.560154 1498704 cri.go:89] found id: ""
	I1217 02:07:57.560179 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.560188 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:57.560194 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:57.560256 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:57.584872 1498704 cri.go:89] found id: ""
	I1217 02:07:57.584898 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.584912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:57.584919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:57.584976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:57.611897 1498704 cri.go:89] found id: ""
	I1217 02:07:57.611930 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.611938 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:57.611945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:57.612004 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:57.636969 1498704 cri.go:89] found id: ""
	I1217 02:07:57.636991 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.636999 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:57.637006 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:57.637069 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:57.661285 1498704 cri.go:89] found id: ""
	I1217 02:07:57.661312 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.661320 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:57.661329 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:57.661340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:57.717030 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:57.717066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.732556 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:57.732588 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:57.802383 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:57.802403 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:57.802414 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:57.831640 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:57.831729 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:56.634914 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:58.635189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:01.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:00.359786 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.375104 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:00.375194 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:00.418191 1498704 cri.go:89] found id: ""
	I1217 02:08:00.418222 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.418232 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:00.418239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:00.418315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:00.456739 1498704 cri.go:89] found id: ""
	I1217 02:08:00.456766 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.456775 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:00.456782 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:00.456850 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:00.488069 1498704 cri.go:89] found id: ""
	I1217 02:08:00.488097 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.488106 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:00.488115 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:00.488180 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:00.522338 1498704 cri.go:89] found id: ""
	I1217 02:08:00.522369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.522383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:00.522391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:00.522477 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:00.552999 1498704 cri.go:89] found id: ""
	I1217 02:08:00.553026 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.553035 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:00.553041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:00.553105 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:00.579678 1498704 cri.go:89] found id: ""
	I1217 02:08:00.579710 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.579719 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:00.579725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:00.579787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:00.605680 1498704 cri.go:89] found id: ""
	I1217 02:08:00.605708 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.605717 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:00.605724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:00.605787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:00.632147 1498704 cri.go:89] found id: ""
	I1217 02:08:00.632172 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.632181 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:00.632191 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:00.632202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:00.658405 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:00.658442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.687017 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:00.687042 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:00.743960 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:00.743997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:00.758928 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:00.758957 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:00.826075 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:08:03.634990 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:05.635168 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:03.326352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.337106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:03.337176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:03.362079 1498704 cri.go:89] found id: ""
	I1217 02:08:03.362103 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.362112 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:03.362120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:03.362185 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:03.406055 1498704 cri.go:89] found id: ""
	I1217 02:08:03.406078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.406086 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:03.406092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:03.406153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:03.469689 1498704 cri.go:89] found id: ""
	I1217 02:08:03.469719 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.469728 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:03.469734 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:03.469795 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:03.495363 1498704 cri.go:89] found id: ""
	I1217 02:08:03.495388 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.495397 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:03.495403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:03.495462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:03.520987 1498704 cri.go:89] found id: ""
	I1217 02:08:03.521020 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.521029 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:03.521035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:03.521104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:03.546993 1498704 cri.go:89] found id: ""
	I1217 02:08:03.547070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.547086 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:03.547094 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:03.547157 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:03.572356 1498704 cri.go:89] found id: ""
	I1217 02:08:03.572381 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.572390 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:03.572396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:03.572465 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:03.601007 1498704 cri.go:89] found id: ""
	I1217 02:08:03.601039 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.601048 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:03.601058 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:03.601069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:03.626163 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:03.626198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:03.653854 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:03.653882 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:03.711530 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:03.711566 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:03.726308 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:03.726377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:03.794467 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
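Between iterations the gatherer first checks whether a kube-apiserver process exists at all, via "sudo pgrep -xnf kube-apiserver.*minikube.*", and only then re-runs the container listing and log collection; in this run the check repeats roughly every three seconds. A sketch of such a wait loop (the interval is read off the timestamps above; the structure is illustrative, not minikube's code):

    // apiserver_wait.go: wait until a kube-apiserver process appears.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for {
            // pgrep exits 0 when at least one process matches the pattern.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            fmt.Println("no kube-apiserver yet; gathering logs and retrying")
            time.Sleep(3 * time.Second) // interval inferred from the log
        }
    }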
	I1217 02:08:06.296166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.306860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:06.306931 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:06.335081 1498704 cri.go:89] found id: ""
	I1217 02:08:06.335118 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.335128 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:06.335140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:06.335216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:06.360315 1498704 cri.go:89] found id: ""
	I1217 02:08:06.360337 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.360346 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:06.360353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:06.360416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:06.438162 1498704 cri.go:89] found id: ""
	I1217 02:08:06.438184 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.438193 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:06.438201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:06.438260 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:06.473712 1498704 cri.go:89] found id: ""
	I1217 02:08:06.473739 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.473750 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:06.473757 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:06.473821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:06.501185 1498704 cri.go:89] found id: ""
	I1217 02:08:06.501213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.501223 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:06.501229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:06.501291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:06.527618 1498704 cri.go:89] found id: ""
	I1217 02:08:06.527642 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.527650 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:06.527657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:06.527723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:06.551855 1498704 cri.go:89] found id: ""
	I1217 02:08:06.551882 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.551892 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:06.551899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:06.551982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:06.577516 1498704 cri.go:89] found id: ""
	I1217 02:08:06.577547 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.577556 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:06.577566 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:06.577577 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:06.592728 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:06.592762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:06.660537 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:06.660559 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:06.660572 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:06.685272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:06.685307 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:06.716733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:06.716761 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
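The log-gathering commands are identical in every iteration: "journalctl -u kubelet -n 400" and "journalctl -u containerd -n 400" tail the last 400 lines of each unit, "dmesg -PH -L=never --level warn,err,crit,alert,emerg" prints warning-and-above kernel messages in human-readable form with the pager and color disabled, and the "container status" step uses a shell fallback that prefers crictl but drops to "docker ps -a" when crictl is missing or errors out. A sketch that runs that exact fallback one-liner from Go:

    // container_status.go: run the fallback one-liner copied from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Printf("both crictl and docker listings failed: %v\n", err)
        }
        fmt.Print(string(out))
    }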
	W1217 02:08:07.635213 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:10.134640 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:09.274376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.285055 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:09.285129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:09.310445 1498704 cri.go:89] found id: ""
	I1217 02:08:09.310468 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.310477 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:09.310483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:09.310551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:09.339399 1498704 cri.go:89] found id: ""
	I1217 02:08:09.339434 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.339443 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:09.339449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:09.339539 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:09.364792 1498704 cri.go:89] found id: ""
	I1217 02:08:09.364830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.364843 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:09.364851 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:09.364921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:09.398786 1498704 cri.go:89] found id: ""
	I1217 02:08:09.398813 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.398822 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:09.398829 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:09.398898 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:09.437605 1498704 cri.go:89] found id: ""
	I1217 02:08:09.437633 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.437670 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:09.437696 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:09.437778 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:09.469389 1498704 cri.go:89] found id: ""
	I1217 02:08:09.469430 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.469439 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:09.469446 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:09.469557 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:09.501822 1498704 cri.go:89] found id: ""
	I1217 02:08:09.501847 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.501856 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:09.501873 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:09.501953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:09.526536 1498704 cri.go:89] found id: ""
	I1217 02:08:09.526604 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.526627 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:09.526649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:09.526685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:09.553800 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:09.553829 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:09.611333 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:09.611367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:09.626057 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:09.626083 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:09.690274 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:09.690296 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:09.690308 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.216656 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.226983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:12.227094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:12.251590 1498704 cri.go:89] found id: ""
	I1217 02:08:12.251613 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.251622 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:12.251628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:12.251686 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:12.276257 1498704 cri.go:89] found id: ""
	I1217 02:08:12.276285 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.276293 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:12.276308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:12.276365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:12.300603 1498704 cri.go:89] found id: ""
	I1217 02:08:12.300628 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.300637 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:12.300643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:12.300704 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:12.328528 1498704 cri.go:89] found id: ""
	I1217 02:08:12.328552 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.328561 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:12.328571 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:12.328629 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:12.353931 1498704 cri.go:89] found id: ""
	I1217 02:08:12.353954 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.353963 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:12.353969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:12.354031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:12.426173 1498704 cri.go:89] found id: ""
	I1217 02:08:12.426238 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.426263 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:12.426283 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:12.426375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:12.463406 1498704 cri.go:89] found id: ""
	I1217 02:08:12.463432 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.463441 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:12.463447 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:12.463511 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:12.491432 1498704 cri.go:89] found id: ""
	I1217 02:08:12.491457 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.491466 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:12.491476 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:12.491487 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:12.549942 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:12.549979 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:12.566124 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:12.566160 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:12.632809 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:12.632878 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:12.632899 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.657969 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:12.658007 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:12.635367 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:14.635409 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:15.189789 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.200614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:15.200684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:15.224844 1498704 cri.go:89] found id: ""
	I1217 02:08:15.224865 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.224874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:15.224880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:15.224939 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:15.253351 1498704 cri.go:89] found id: ""
	I1217 02:08:15.253417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.253441 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:15.253459 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:15.253547 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:15.278140 1498704 cri.go:89] found id: ""
	I1217 02:08:15.278216 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.278238 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:15.278257 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:15.278335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:15.303296 1498704 cri.go:89] found id: ""
	I1217 02:08:15.303325 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.303334 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:15.303340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:15.303399 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:15.332342 1498704 cri.go:89] found id: ""
	I1217 02:08:15.332369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.332379 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:15.332386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:15.332442 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:15.361393 1498704 cri.go:89] found id: ""
	I1217 02:08:15.361417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.361426 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:15.361432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:15.361501 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:15.399309 1498704 cri.go:89] found id: ""
	I1217 02:08:15.399335 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.399343 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:15.399350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:15.399409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:15.441743 1498704 cri.go:89] found id: ""
	I1217 02:08:15.441769 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.441778 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:15.441787 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:15.441799 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:15.508941 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:15.508977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:15.524099 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:15.524127 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:15.595333 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:15.595351 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:15.595367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:15.620921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:15.620958 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
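	(Each probe cycle from pid 1498704 above has the same shape: pgrep for a running kube-apiserver, then one `sudo crictl ps -a --quiet --name=<component>` per expected control-plane component (cri.go:54), counting the returned container IDs (logs.go:282) and emitting the "No container was found matching" warning when the list is empty. A hedged Go sketch of that loop, simplified to run crictl locally rather than through minikube's ssh_runner:)

	// probe_containers.go - hedged sketch; runs crictl on the local host instead
	// of over SSH, which is a simplification of what the log shows.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			// --quiet prints one container ID per line when anything matches.
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	}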
	W1217 02:08:17.135481 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:19.635228 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:18.151199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.162135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:18.162207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:18.190085 1498704 cri.go:89] found id: ""
	I1217 02:08:18.190108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.190116 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:18.190123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:18.190186 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:18.218906 1498704 cri.go:89] found id: ""
	I1217 02:08:18.218930 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.218938 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:18.218944 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:18.219002 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:18.242454 1498704 cri.go:89] found id: ""
	I1217 02:08:18.242476 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.242484 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:18.242490 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:18.242549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:18.267483 1498704 cri.go:89] found id: ""
	I1217 02:08:18.267505 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.267514 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:18.267527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:18.267587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:18.291870 1498704 cri.go:89] found id: ""
	I1217 02:08:18.291894 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.291902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:18.291909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:18.291970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:18.315514 1498704 cri.go:89] found id: ""
	I1217 02:08:18.315543 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.315551 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:18.315558 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:18.315617 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:18.338958 1498704 cri.go:89] found id: ""
	I1217 02:08:18.338980 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.338988 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:18.338995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:18.339052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:18.362300 1498704 cri.go:89] found id: ""
	I1217 02:08:18.362326 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.362339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:18.362349 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:18.362361 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:18.441796 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:18.441881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:18.465294 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:18.465318 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:18.527976 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:18.527999 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:18.528012 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:18.552941 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:18.552971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:21.080554 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.090872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:21.090951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:21.119427 1498704 cri.go:89] found id: ""
	I1217 02:08:21.119451 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.119459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:21.119466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:21.119531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:21.145488 1498704 cri.go:89] found id: ""
	I1217 02:08:21.145509 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.145517 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:21.145524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:21.145589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:21.171795 1498704 cri.go:89] found id: ""
	I1217 02:08:21.171822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.171830 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:21.171837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:21.171897 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:21.200041 1498704 cri.go:89] found id: ""
	I1217 02:08:21.200067 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.200076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:21.200083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:21.200144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:21.224266 1498704 cri.go:89] found id: ""
	I1217 02:08:21.224294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.224302 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:21.224310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:21.224374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:21.249832 1498704 cri.go:89] found id: ""
	I1217 02:08:21.249859 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.249868 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:21.249875 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:21.249934 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:21.276533 1498704 cri.go:89] found id: ""
	I1217 02:08:21.276556 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.276565 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:21.276577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:21.276638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:21.302869 1498704 cri.go:89] found id: ""
	I1217 02:08:21.302898 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.302906 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:21.302920 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:21.302932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:21.359571 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:21.359612 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:21.386971 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:21.387000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:21.481485 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:21.481511 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:21.481523 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:21.510229 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:21.510266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:22.134985 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:24.135180 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:26.135497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:24.042457 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.053742 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:24.053815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:24.079751 1498704 cri.go:89] found id: ""
	I1217 02:08:24.079777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.079793 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:24.079801 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:24.079863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:24.106268 1498704 cri.go:89] found id: ""
	I1217 02:08:24.106294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.106304 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:24.106310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:24.106372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:24.136105 1498704 cri.go:89] found id: ""
	I1217 02:08:24.136127 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.136141 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:24.136147 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:24.136208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:24.162676 1498704 cri.go:89] found id: ""
	I1217 02:08:24.162704 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.162713 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:24.162719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:24.162781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:24.186881 1498704 cri.go:89] found id: ""
	I1217 02:08:24.186909 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.186918 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:24.186924 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:24.186983 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:24.211784 1498704 cri.go:89] found id: ""
	I1217 02:08:24.211807 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.211816 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:24.211823 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:24.211883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:24.239768 1498704 cri.go:89] found id: ""
	I1217 02:08:24.239791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.239799 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:24.239806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:24.239863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:24.267746 1498704 cri.go:89] found id: ""
	I1217 02:08:24.267826 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.267843 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:24.267853 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:24.267864 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:24.292626 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:24.292661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:24.324726 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:24.324756 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:24.386142 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:24.386184 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:24.417577 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:24.417605 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:24.496974 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
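	(Comparing timestamps, the probe-and-gather cycle repeats roughly every three seconds, 02:08:12, :15, :18, :21, :24, until the apiserver becomes reachable or an overall deadline expires. A hedged sketch of such a fixed-interval poll using apimachinery's wait helpers follows; the three-second interval matches the cadence above, while the six-minute timeout and the bare TCP health check are illustrative assumptions, not values from the log.)

	// apiserver_wait.go - hedged sketch of a fixed-interval poll with an overall
	// timeout; interval mirrors the ~3s cadence visible in the timestamps above.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		err := wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", time.Second)
				if err != nil {
					// "connection refused": not done yet, keep polling.
					return false, nil
				}
				conn.Close()
				return true, nil
			})
		if err != nil {
			fmt.Println("apiserver never became reachable:", err)
		}
	}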
	I1217 02:08:26.997267 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.015470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:27.015561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:27.041572 1498704 cri.go:89] found id: ""
	I1217 02:08:27.041593 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.041601 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:27.041608 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:27.041697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:27.067860 1498704 cri.go:89] found id: ""
	I1217 02:08:27.067884 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.067902 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:27.067923 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:27.068020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:27.091698 1498704 cri.go:89] found id: ""
	I1217 02:08:27.091722 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.091737 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:27.091744 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:27.091804 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:27.116923 1498704 cri.go:89] found id: ""
	I1217 02:08:27.116946 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.116954 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:27.116961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:27.117020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:27.142595 1498704 cri.go:89] found id: ""
	I1217 02:08:27.142619 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.142628 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:27.142634 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:27.142693 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:27.167169 1498704 cri.go:89] found id: ""
	I1217 02:08:27.167195 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.167204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:27.167211 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:27.167271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:27.191350 1498704 cri.go:89] found id: ""
	I1217 02:08:27.191376 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.191384 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:27.191391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:27.191451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:27.216388 1498704 cri.go:89] found id: ""
	I1217 02:08:27.216413 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.216422 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:27.216431 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:27.216442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:27.279861 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:27.279884 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:27.279900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:27.304990 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:27.305027 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:27.333926 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:27.333952 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:27.396365 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:27.396403 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:08:28.635158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:30.635316 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:29.913629 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.924284 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:29.924359 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:29.951846 1498704 cri.go:89] found id: ""
	I1217 02:08:29.951873 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.951882 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:29.951888 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:29.951948 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:29.979680 1498704 cri.go:89] found id: ""
	I1217 02:08:29.979709 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.979718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:29.979724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:29.979783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:30.017361 1498704 cri.go:89] found id: ""
	I1217 02:08:30.017494 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.017508 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:30.017517 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:30.017600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:30.055966 1498704 cri.go:89] found id: ""
	I1217 02:08:30.055994 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.056008 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:30.056015 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:30.056153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:30.086268 1498704 cri.go:89] found id: ""
	I1217 02:08:30.086296 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.086305 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:30.086313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:30.086387 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:30.114436 1498704 cri.go:89] found id: ""
	I1217 02:08:30.114474 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.114485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:30.114493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:30.114563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:30.143104 1498704 cri.go:89] found id: ""
	I1217 02:08:30.143130 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.143140 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:30.143148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:30.143215 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:30.178848 1498704 cri.go:89] found id: ""
	I1217 02:08:30.178912 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.178928 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:30.178939 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:30.178950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:30.235226 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:30.235261 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:30.250400 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:30.250427 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:30.316823 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:30.316843 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:30.316855 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:30.341943 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:30.341985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:33.135099 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:35.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:32.880177 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.891005 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:32.891073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:32.918870 1498704 cri.go:89] found id: ""
	I1217 02:08:32.918896 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.918905 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:32.918912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:32.918970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:32.944098 1498704 cri.go:89] found id: ""
	I1217 02:08:32.944123 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.944132 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:32.944137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:32.944197 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:32.968767 1498704 cri.go:89] found id: ""
	I1217 02:08:32.968791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.968801 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:32.968806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:32.968864 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:32.992596 1498704 cri.go:89] found id: ""
	I1217 02:08:32.992624 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.992632 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:32.992638 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:32.992702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:33.018400 1498704 cri.go:89] found id: ""
	I1217 02:08:33.018424 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.018433 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:33.018439 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:33.018497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:33.043622 1498704 cri.go:89] found id: ""
	I1217 02:08:33.043650 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.043660 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:33.043666 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:33.043728 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:33.068595 1498704 cri.go:89] found id: ""
	I1217 02:08:33.068617 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.068627 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:33.068633 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:33.068695 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:33.097084 1498704 cri.go:89] found id: ""
	I1217 02:08:33.097108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.097117 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:33.097126 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:33.097137 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:33.122964 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:33.123001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:33.151132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:33.151159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:33.206768 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:33.206805 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:33.221251 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:33.221330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:33.289516 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
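	(Each "failed describe nodes" entry above shows the collector shelling out to kubectl via /bin/bash -c, capturing stdout and stderr separately, recording a non-zero exit as a logs.go:130 warning, and then moving on to the next log source rather than aborting. A hedged sketch of that run-and-capture pattern; the command line is copied from the log, everything else is an assumption:)

	// run_and_capture.go - hedged sketch of running a collector command and
	// reporting stdout/stderr separately, as in the "stdout:"/"stderr:" sections.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			// Non-zero exit (here: connection refused) is logged as a warning,
			// and collection simply continues with the next source.
			fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, stdout.String(), stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}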
	I1217 02:08:35.789806 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.800262 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:35.800330 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:35.824823 1498704 cri.go:89] found id: ""
	I1217 02:08:35.824844 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.824852 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:35.824859 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:35.824916 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:35.849352 1498704 cri.go:89] found id: ""
	I1217 02:08:35.849379 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.849388 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:35.849395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:35.849455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:35.873025 1498704 cri.go:89] found id: ""
	I1217 02:08:35.873045 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.873054 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:35.873060 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:35.873123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:35.897548 1498704 cri.go:89] found id: ""
	I1217 02:08:35.897572 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.897581 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:35.897586 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:35.897660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:35.927220 1498704 cri.go:89] found id: ""
	I1217 02:08:35.927283 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.927301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:35.927309 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:35.927374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:35.955050 1498704 cri.go:89] found id: ""
	I1217 02:08:35.955075 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.955083 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:35.955089 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:35.955168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:35.979074 1498704 cri.go:89] found id: ""
	I1217 02:08:35.979144 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.979160 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:35.979167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:35.979228 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:36.005502 1498704 cri.go:89] found id: ""
	I1217 02:08:36.005529 1498704 logs.go:282] 0 containers: []
	W1217 02:08:36.005557 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:36.005568 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:36.005582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:36.022508 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:36.022536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:36.088117 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:36.088139 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:36.088152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:36.112883 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:36.112917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:36.142584 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:36.142610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
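The block above is one full pass of the log-gathering loop: it probes for each expected control-plane container by name with crictl, records "0 containers" when --quiet returns no IDs, and then falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal Go sketch of the probe step, assuming only the crictl command shown in the log (an illustration, not minikube's actual cri.go code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the logged command:
// sudo crictl ps -a --quiet --name=<name>
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; no output means 0 containers.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```

Empty output from --quiet is the expected signal here; the repeated found id: "" entries in the log correspond to exactly that case.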
	W1217 02:08:37.635249 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:40.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:38.698261 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:38.709807 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:38.709880 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:38.734678 1498704 cri.go:89] found id: ""
	I1217 02:08:38.734703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.734712 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:38.734718 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:38.734777 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:38.764118 1498704 cri.go:89] found id: ""
	I1217 02:08:38.764145 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.764154 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:38.764161 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:38.764223 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:38.792269 1498704 cri.go:89] found id: ""
	I1217 02:08:38.792295 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.792305 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:38.792311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:38.792371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:38.817823 1498704 cri.go:89] found id: ""
	I1217 02:08:38.817845 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.817854 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:38.817861 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:38.817921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:38.846444 1498704 cri.go:89] found id: ""
	I1217 02:08:38.846469 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.846478 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:38.846484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:38.846575 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:38.870805 1498704 cri.go:89] found id: ""
	I1217 02:08:38.870830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.870839 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:38.870845 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:38.870909 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:38.902022 1498704 cri.go:89] found id: ""
	I1217 02:08:38.902047 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.902056 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:38.902063 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:38.902127 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:38.925802 1498704 cri.go:89] found id: ""
	I1217 02:08:38.925831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.925851 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:38.925860 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:38.925871 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.991113 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:38.991154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:39.006019 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:39.006049 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:39.074269 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:39.074328 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:39.074342 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:39.099793 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:39.099827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:41.629026 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.643330 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:41.643411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:41.702722 1498704 cri.go:89] found id: ""
	I1217 02:08:41.702743 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.702752 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:41.702758 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:41.702817 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:41.727343 1498704 cri.go:89] found id: ""
	I1217 02:08:41.727368 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.727377 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:41.727383 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:41.727443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:41.752306 1498704 cri.go:89] found id: ""
	I1217 02:08:41.752331 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.752340 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:41.752346 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:41.752409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:41.777003 1498704 cri.go:89] found id: ""
	I1217 02:08:41.777078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.777101 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:41.777121 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:41.777225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:41.801272 1498704 cri.go:89] found id: ""
	I1217 02:08:41.801298 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.801306 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:41.801313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:41.801371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:41.827046 1498704 cri.go:89] found id: ""
	I1217 02:08:41.827070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.827078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:41.827085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:41.827142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:41.855924 1498704 cri.go:89] found id: ""
	I1217 02:08:41.855956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.855965 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:41.855972 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:41.856042 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:41.882797 1498704 cri.go:89] found id: ""
	I1217 02:08:41.882821 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.882830 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:41.882840 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:41.882856 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:41.897281 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:41.897316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:41.963310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:41.963333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:41.963344 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:41.988494 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:41.988529 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:42.019738 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:42.019770 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:42.135661 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:44.635135 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
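The 1494358 warnings interleaved through this log come from a separate polling loop that re-checks the no-preload-178365 node's "Ready" condition roughly every 2.5 seconds while the apiserver refuses connections on 192.168.76.2:8443. A self-contained Go sketch of that retry pattern, using the endpoint and cadence visible in the log (TLS verification is skipped only to keep the sketch standalone; the real client authenticates with the cluster's certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365"
	for {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the logged failure: connection refused, retry later.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
		return
	}
}
```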
	I1217 02:08:44.578521 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.589302 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:44.589376 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:44.614651 1498704 cri.go:89] found id: ""
	I1217 02:08:44.614676 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.614685 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:44.614692 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:44.614755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:44.666392 1498704 cri.go:89] found id: ""
	I1217 02:08:44.666414 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.666422 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:44.666429 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:44.666487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:44.722566 1498704 cri.go:89] found id: ""
	I1217 02:08:44.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.722599 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:44.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:44.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:44.747631 1498704 cri.go:89] found id: ""
	I1217 02:08:44.747656 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.747665 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:44.747671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:44.747730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:44.775719 1498704 cri.go:89] found id: ""
	I1217 02:08:44.775756 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.775765 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:44.775773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:44.775846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:44.801032 1498704 cri.go:89] found id: ""
	I1217 02:08:44.801056 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.801066 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:44.801072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:44.801131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:44.827838 1498704 cri.go:89] found id: ""
	I1217 02:08:44.827872 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.827883 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:44.827890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:44.827961 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:44.852948 1498704 cri.go:89] found id: ""
	I1217 02:08:44.852981 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.852990 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:44.853000 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:44.853011 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:44.908280 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:44.908314 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:44.923445 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:44.923538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:44.992600 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:44.992624 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:44.992637 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:45.027924 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:45.027975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:47.587759 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.598591 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:47.598664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:47.660378 1498704 cri.go:89] found id: ""
	I1217 02:08:47.660400 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.660408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:47.660414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:47.660472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:47.708467 1498704 cri.go:89] found id: ""
	I1217 02:08:47.708489 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.708498 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:47.708504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:47.708563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:47.733161 1498704 cri.go:89] found id: ""
	I1217 02:08:47.733183 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.733191 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:47.733198 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:47.733264 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:47.759190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.759213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.759222 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:47.759228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:47.759285 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:47.787579 1498704 cri.go:89] found id: ""
	I1217 02:08:47.787601 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.787610 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:47.787616 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:47.787697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:47.816190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.816215 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.816224 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:47.816231 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:47.816323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:47.843534 1498704 cri.go:89] found id: ""
	I1217 02:08:47.843562 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.843572 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:47.843578 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:47.843643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1217 02:08:47.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:49.634635 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:47.867806 1498704 cri.go:89] found id: ""
	I1217 02:08:47.867831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.867841 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:47.867852 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:47.867870 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:47.926619 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:47.926658 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:47.941706 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:47.941734 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:48.009461 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:48.009539 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:48.009561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:48.035273 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:48.035311 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
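The "container status" step above is a shell fallback: prefer crictl when `which crictl` resolves, otherwise fall back to `docker ps -a`. A Go equivalent of that fallback, included as an illustration of the logged one-liner rather than minikube's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
)

// statusOutput tries crictl first and falls back to docker, like the logged
// one-liner: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
func statusOutput() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := statusOutput()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(out)
}
```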
	I1217 02:08:50.567421 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.578623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:50.578694 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:50.607374 1498704 cri.go:89] found id: ""
	I1217 02:08:50.607396 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.607405 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.607411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:50.607472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:50.666455 1498704 cri.go:89] found id: ""
	I1217 02:08:50.666484 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.666493 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.666499 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:50.666559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:50.717784 1498704 cri.go:89] found id: ""
	I1217 02:08:50.717822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.717831 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.717838 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:50.717941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:50.748500 1498704 cri.go:89] found id: ""
	I1217 02:08:50.748531 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.748543 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.748550 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:50.748618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:50.774642 1498704 cri.go:89] found id: ""
	I1217 02:08:50.774668 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.774677 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.774683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:50.774742 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:50.803738 1498704 cri.go:89] found id: ""
	I1217 02:08:50.803760 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.803769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.803776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:50.803840 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:50.828145 1498704 cri.go:89] found id: ""
	I1217 02:08:50.828212 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.828238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.828256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:50.828335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:50.853950 1498704 cri.go:89] found id: ""
	I1217 02:08:50.853976 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.853985 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.853995 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.854006 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.910278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.910316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.924980 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.925008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.992234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:50.992257 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:50.992271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:51.018744 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:51.018778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:52.134591 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:54.134633 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:53.547953 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.558518 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:53.558593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:53.583100 1498704 cri.go:89] found id: ""
	I1217 02:08:53.583125 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.583134 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.583141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:53.583202 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:53.607925 1498704 cri.go:89] found id: ""
	I1217 02:08:53.607948 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.607956 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.607962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:53.608023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:53.657081 1498704 cri.go:89] found id: ""
	I1217 02:08:53.657104 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.657127 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.657135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:53.657208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:53.704278 1498704 cri.go:89] found id: ""
	I1217 02:08:53.704305 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.704313 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.704321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:53.704381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:53.730823 1498704 cri.go:89] found id: ""
	I1217 02:08:53.730851 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.730860 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.730868 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:53.730928 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:53.757094 1498704 cri.go:89] found id: ""
	I1217 02:08:53.757116 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.757125 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.757132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:53.757192 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:53.786671 1498704 cri.go:89] found id: ""
	I1217 02:08:53.786696 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.786705 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.786711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:53.786768 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:53.810935 1498704 cri.go:89] found id: ""
	I1217 02:08:53.810957 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.810966 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.810975 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.810986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.866107 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.866140 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:53.881003 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.881037 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.945396 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:53.945419 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:53.945432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:53.973428 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.973469 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
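Every "describe nodes" attempt in this log fails identically: kubectl exits with status 1 and prints "connection refused" for each API group probe. A short sketch of how that failure mode can be told apart from other kubectl errors, running the same command the gatherer runs (illustrative only, not minikube's logs.go code):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The binary path and kubeconfig are the ones shown in the log.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	switch {
	case err == nil:
		fmt.Println("describe nodes succeeded")
	case strings.Contains(stderr.String(), "connection refused"):
		// The logged failure mode: nothing is listening on localhost:8443.
		fmt.Println("apiserver is not listening on localhost:8443:", err)
	default:
		fmt.Println("describe nodes failed:", err)
	}
}
```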
	I1217 02:08:56.504673 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.515738 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:56.515816 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:56.540741 1498704 cri.go:89] found id: ""
	I1217 02:08:56.540765 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.540773 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.540780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:56.540846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:56.565810 1498704 cri.go:89] found id: ""
	I1217 02:08:56.565831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.565840 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.565846 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:56.565907 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:56.596074 1498704 cri.go:89] found id: ""
	I1217 02:08:56.596096 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.596105 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.596112 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:56.596173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:56.636207 1498704 cri.go:89] found id: ""
	I1217 02:08:56.636229 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.636238 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.636244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:56.636304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:56.698720 1498704 cri.go:89] found id: ""
	I1217 02:08:56.698749 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.698758 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.698765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:56.698838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:56.732897 1498704 cri.go:89] found id: ""
	I1217 02:08:56.732918 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.732926 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.732933 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:56.732999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:56.762677 1498704 cri.go:89] found id: ""
	I1217 02:08:56.762703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.762712 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.762719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:56.762779 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:56.788307 1498704 cri.go:89] found id: ""
	I1217 02:08:56.788333 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.788342 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.788352 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.788364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.844513 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.844548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.858936 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.858968 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.925270 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.925293 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:56.925305 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:56.951928 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.951967 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
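The cycle above is minikube's diagnostic sweep: it probes each expected control-plane component for a matching CRI container (every query returns empty), then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. For manual triage, the same sweep can be reproduced over SSH; a minimal sketch, assuming shell access to the node (e.g. via `minikube ssh`; the profile name is not shown in this excerpt), with the component list and commands taken from the log:

    # Probe every component minikube checked above; -a includes exited
    # containers, so empty output means the container was never created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done
    # The follow-up log gathering, exactly as run by minikube:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

With every listing empty even for exited containers, the kubelet and containerd journals are the only evidence left; the sweep itself can only confirm that the static control-plane pods were never created.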
	W1217 02:08:56.634544 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:58.634782 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:01.135356 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
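Interleaved with that sweep, a second start (pid 1494358, profile no-preload-178365) is polling the node's Ready condition and getting connection refused from its apiserver endpoint. A quick reachability check, sketched under the assumption that the machine running it can reach the node IP shown in the log:

    # The endpoint the poller keeps retrying (IP and port from the log above):
    curl -sk https://192.168.76.2:8443/healthz; echo
    # From inside the node: is anything listening on 8443 at all?
    sudo ss -ltnp | grep 8443

Any HTTP response here, even 401/403, would mean an apiserver is at least listening; in this run the TCP dial itself is refused, so nothing is bound to the port.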
	I1217 02:08:59.483487 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.494825 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:59.494899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:59.520751 1498704 cri.go:89] found id: ""
	I1217 02:08:59.520777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.520785 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.520792 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:59.520851 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:59.546097 1498704 cri.go:89] found id: ""
	I1217 02:08:59.546122 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.546131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.546138 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:59.546205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:59.571525 1498704 cri.go:89] found id: ""
	I1217 02:08:59.571548 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.571556 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.571562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:59.571635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:59.595916 1498704 cri.go:89] found id: ""
	I1217 02:08:59.595944 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.595952 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.595959 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:59.596021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:59.677470 1498704 cri.go:89] found id: ""
	I1217 02:08:59.677497 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.677506 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.677512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:59.677577 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:59.708285 1498704 cri.go:89] found id: ""
	I1217 02:08:59.708311 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.708320 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.708328 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:59.708388 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:59.735444 1498704 cri.go:89] found id: ""
	I1217 02:08:59.735466 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.735474 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.735481 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:59.735551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:59.758934 1498704 cri.go:89] found id: ""
	I1217 02:08:59.758956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.758964 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.758974 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.758985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.786487 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.786513 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.843688 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.843719 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.858632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.858661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.922844 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
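The describe-nodes gather fails for the same underlying reason: the in-VM kubectl reads /var/lib/minikube/kubeconfig, which points at localhost:8443, and nothing is accepting connections there. The exact command, copied from the log, can be rerun by hand from a shell on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # "connection refused" on [::1]:8443 here is consistent with the empty
    # crictl listings: the kube-apiserver container simply does not exist.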
	I1217 02:08:59.922867 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:59.922888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
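Each retry cycle opens with a process-level check before falling back to the CRI queries. The pgrep flags are worth decoding when reading these logs; a sketch of the same call (quoting added for interactive use):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # -f  match the pattern against the full command line
    # -x  require the pattern to match that command line exactly
    # -n  print only the newest matching PID
    # No output (exit status 1) means no kube-apiserver process exists,
    # so minikube goes on to ask the CRI whether a container ever ran.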
	I1217 02:09:02.448942 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.459473 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:02.459570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:02.487463 1498704 cri.go:89] found id: ""
	I1217 02:09:02.487486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.487494 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.487529 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:02.487591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:02.516013 1498704 cri.go:89] found id: ""
	I1217 02:09:02.516038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.516047 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.516053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:02.516118 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:02.541783 1498704 cri.go:89] found id: ""
	I1217 02:09:02.541806 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.541814 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.541820 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:02.541876 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:02.566427 1498704 cri.go:89] found id: ""
	I1217 02:09:02.566450 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.566459 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.566465 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:02.566561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:02.590894 1498704 cri.go:89] found id: ""
	I1217 02:09:02.590917 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.590926 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.590932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:02.590998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:02.614645 1498704 cri.go:89] found id: ""
	I1217 02:09:02.614668 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.614677 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.614683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:02.614747 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:02.656626 1498704 cri.go:89] found id: ""
	I1217 02:09:02.656662 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.656671 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.656681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:02.656751 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:02.702753 1498704 cri.go:89] found id: ""
	I1217 02:09:02.702787 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.702796 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.702806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.702817 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.772243 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.772266 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:02.772278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.797608 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.797893 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:02.829032 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.829057 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:03.634729 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:06.135608 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:02.886939 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.886975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.401718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.412408 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:05.412488 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:05.441786 1498704 cri.go:89] found id: ""
	I1217 02:09:05.441821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.441830 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.441837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:05.441908 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:05.466385 1498704 cri.go:89] found id: ""
	I1217 02:09:05.466408 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.466416 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.466422 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:05.466481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:05.491033 1498704 cri.go:89] found id: ""
	I1217 02:09:05.491057 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.491066 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.491072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:05.491131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:05.515650 1498704 cri.go:89] found id: ""
	I1217 02:09:05.515675 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.515684 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.515691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:05.515753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:05.539973 1498704 cri.go:89] found id: ""
	I1217 02:09:05.539996 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.540004 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.540016 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:05.540077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:05.565317 1498704 cri.go:89] found id: ""
	I1217 02:09:05.565338 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.565347 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.565353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:05.565414 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:05.590136 1498704 cri.go:89] found id: ""
	I1217 02:09:05.590161 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.590169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.590176 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:05.590240 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:05.614696 1498704 cri.go:89] found id: ""
	I1217 02:09:05.614733 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.614742 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.614752 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.614762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.682980 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.683022 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.700674 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.700704 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.777617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.777635 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:05.777670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:05.803121 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.803155 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:08.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:10.635438 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:08.332434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.343036 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:08.343108 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:08.367411 1498704 cri.go:89] found id: ""
	I1217 02:09:08.367434 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.367443 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.367449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:08.367517 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:08.391668 1498704 cri.go:89] found id: ""
	I1217 02:09:08.391695 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.391704 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.391712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:08.391775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:08.415929 1498704 cri.go:89] found id: ""
	I1217 02:09:08.415953 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.415961 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.415968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:08.416050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:08.441685 1498704 cri.go:89] found id: ""
	I1217 02:09:08.441755 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.441779 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.441798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:08.441888 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:08.466687 1498704 cri.go:89] found id: ""
	I1217 02:09:08.466713 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.466722 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.466728 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:08.466808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:08.491044 1498704 cri.go:89] found id: ""
	I1217 02:09:08.491069 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.491078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.491085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:08.491190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:08.517483 1498704 cri.go:89] found id: ""
	I1217 02:09:08.517508 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.517517 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.517524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:08.517593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:08.543991 1498704 cri.go:89] found id: ""
	I1217 02:09:08.544017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.544026 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.544035 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.544053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.608510 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.608567 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.642989 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.643026 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.751212 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.751241 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:08.751254 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:08.779142 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.779180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.312760 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.327627 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:11.327714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:11.352557 1498704 cri.go:89] found id: ""
	I1217 02:09:11.352580 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.352588 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.352595 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:11.352654 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:11.378891 1498704 cri.go:89] found id: ""
	I1217 02:09:11.378913 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.378922 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.378928 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:11.378987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:11.403393 1498704 cri.go:89] found id: ""
	I1217 02:09:11.403416 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.403424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.403430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:11.403489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:11.432435 1498704 cri.go:89] found id: ""
	I1217 02:09:11.432459 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.432472 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.432479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:11.432565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:11.458410 1498704 cri.go:89] found id: ""
	I1217 02:09:11.458436 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.458445 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.458451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:11.458510 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:11.484113 1498704 cri.go:89] found id: ""
	I1217 02:09:11.484140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.484149 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.484156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:11.484216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:11.511088 1498704 cri.go:89] found id: ""
	I1217 02:09:11.511112 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.511121 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.511128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:11.511191 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:11.540295 1498704 cri.go:89] found id: ""
	I1217 02:09:11.540324 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.540333 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.540342 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.540354 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.554828 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.554857 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:11.615811 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:11.615835 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:11.615849 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:11.643999 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.644035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.696705 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.696733 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:13.134531 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:14.634797 1494358 node_ready.go:38] duration metric: took 6m0.000749408s for node "no-preload-178365" to be "Ready" ...
	I1217 02:09:14.638073 1494358 out.go:203] 
	W1217 02:09:14.640977 1494358 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:09:14.641013 1494358 out.go:285] * 
	W1217 02:09:14.643229 1494358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:09:14.646121 1494358 out.go:203] 
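The poll gives up exactly at the 6-minute node-ready deadline (6m0.000749408s), so the GUEST_START error above is the wait expiring rather than a distinct failure. What the test is effectively waiting for, expressed with plain kubectl as a sketch (it would need a reachable apiserver, which this run never got):

    kubectl wait --for=condition=Ready node/no-preload-178365 --timeout=6m

Against this cluster that command would fail fast on connection refused rather than time out, which is why minikube polls the endpoint in a retry loop instead.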
	I1217 02:09:14.265939 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.276062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:14.276129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:14.301710 1498704 cri.go:89] found id: ""
	I1217 02:09:14.301736 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.301744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.301753 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:14.301811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:14.327085 1498704 cri.go:89] found id: ""
	I1217 02:09:14.327111 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.327119 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.327125 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:14.327182 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:14.351112 1498704 cri.go:89] found id: ""
	I1217 02:09:14.351134 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.351142 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.351148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:14.351208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:14.379796 1498704 cri.go:89] found id: ""
	I1217 02:09:14.379823 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.379833 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.379840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:14.379902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:14.404135 1498704 cri.go:89] found id: ""
	I1217 02:09:14.404158 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.404167 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.404172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:14.404234 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:14.428171 1498704 cri.go:89] found id: ""
	I1217 02:09:14.428194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.428204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.428212 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:14.428272 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:14.455193 1498704 cri.go:89] found id: ""
	I1217 02:09:14.455217 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.455225 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.455232 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:14.455292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:14.479959 1498704 cri.go:89] found id: ""
	I1217 02:09:14.479985 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.479994 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.480003 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.480014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.537013 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.537048 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:14.551864 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:14.551888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:14.616449 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:14.616522 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:14.616551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:14.646206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:14.646248 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.269774 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.280406 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:17.280478 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:17.305501 1498704 cri.go:89] found id: ""
	I1217 02:09:17.305529 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.305537 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.305544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:17.305601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:17.330336 1498704 cri.go:89] found id: ""
	I1217 02:09:17.330361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.330370 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.330377 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:17.330436 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:17.355210 1498704 cri.go:89] found id: ""
	I1217 02:09:17.355235 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.355250 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.355256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:17.355315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:17.380868 1498704 cri.go:89] found id: ""
	I1217 02:09:17.380893 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.380901 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.380908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:17.380968 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:17.406748 1498704 cri.go:89] found id: ""
	I1217 02:09:17.406771 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.406779 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.406785 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:17.406844 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:17.431237 1498704 cri.go:89] found id: ""
	I1217 02:09:17.431263 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.431272 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.431279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:17.431337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:17.455474 1498704 cri.go:89] found id: ""
	I1217 02:09:17.455500 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.455516 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.455523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:17.455586 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:17.479040 1498704 cri.go:89] found id: ""
	I1217 02:09:17.479062 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.479070 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.479079 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:17.479092 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.511305 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.511333 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:17.567635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:17.567672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:17.583863 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:17.583892 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:17.655165 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:17.655185 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:17.655198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
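	The block above is one full diagnostic pass: minikube enumerates CRI containers for every expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the same per-component check, assuming crictl is installed and the CRI socket is up (illustrative shell-out code, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers shells out the same way the log above does:
// `crictl ps -a --quiet --name=<component>` prints one container ID
// per line, or nothing at all when no container matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the `No container was found matching ...` warnings above.
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}
```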
	I1217 02:09:20.181833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.192614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:20.192732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:20.219176 1498704 cri.go:89] found id: ""
	I1217 02:09:20.219199 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.219208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.219215 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:20.219275 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:20.248198 1498704 cri.go:89] found id: ""
	I1217 02:09:20.248224 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.248233 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.248239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:20.248299 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:20.273332 1498704 cri.go:89] found id: ""
	I1217 02:09:20.273355 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.273363 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.273370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:20.273429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:20.299548 1498704 cri.go:89] found id: ""
	I1217 02:09:20.299621 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.299655 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.299668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:20.299741 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:20.328882 1498704 cri.go:89] found id: ""
	I1217 02:09:20.328911 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.328919 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.328925 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:20.328987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:20.354861 1498704 cri.go:89] found id: ""
	I1217 02:09:20.354887 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.354898 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.354904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:20.354999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:20.380708 1498704 cri.go:89] found id: ""
	I1217 02:09:20.380744 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.380754 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:20.380761 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:20.380833 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:20.410724 1498704 cri.go:89] found id: ""
	I1217 02:09:20.410749 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.410758 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:20.410767 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:20.410778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:20.470014 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:20.470053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:20.484955 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:20.484989 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:20.548617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:20.548637 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:20.548649 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.573994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:20.574030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
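	Every `describe nodes` attempt in this section fails the same way: kubectl cannot reach the apiserver on localhost:8443 because nothing is listening there. A quick way to confirm that from the node without involving kubectl, sketched below under the assumption that the apiserver is expected on port 8443 (hypothetical helper, not part of the test suite):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer dials the apiserver port directly. A "connection refused"
// here reproduces the kubectl errors above at the TCP level: the listener
// simply is not up yet.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	conn.Close()
	return nil
}

func main() {
	if err := probeAPIServer("localhost:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	fmt.Println("apiserver port is accepting connections")
}
```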
	I1217 02:09:23.106211 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.116663 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:23.116732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:23.144995 1498704 cri.go:89] found id: ""
	I1217 02:09:23.145017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.145025 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.145031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:23.145089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:23.172623 1498704 cri.go:89] found id: ""
	I1217 02:09:23.172651 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.172660 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.172668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:23.172727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:23.201388 1498704 cri.go:89] found id: ""
	I1217 02:09:23.201415 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.201424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.201437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:23.201500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:23.225335 1498704 cri.go:89] found id: ""
	I1217 02:09:23.225361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.225370 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.225376 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:23.225433 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:23.251629 1498704 cri.go:89] found id: ""
	I1217 02:09:23.251654 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.251662 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:23.251668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:23.251733 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:23.279092 1498704 cri.go:89] found id: ""
	I1217 02:09:23.279120 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.279129 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:23.279136 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:23.279199 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:23.303104 1498704 cri.go:89] found id: ""
	I1217 02:09:23.303126 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.303134 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:23.303140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:23.303204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:23.327448 1498704 cri.go:89] found id: ""
	I1217 02:09:23.327479 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.327488 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:23.327497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:23.327544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:23.394139 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:23.394186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:23.409933 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:23.409961 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:23.478459 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:23.478484 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:23.478498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:23.503474 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:23.503515 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:26.036615 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.047567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:26.047682 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:26.072876 1498704 cri.go:89] found id: ""
	I1217 02:09:26.072903 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.072912 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.072919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:26.072981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:26.100352 1498704 cri.go:89] found id: ""
	I1217 02:09:26.100378 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.100387 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:26.100392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:26.100450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:26.135848 1498704 cri.go:89] found id: ""
	I1217 02:09:26.135875 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.135884 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:26.135890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:26.135950 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:26.168993 1498704 cri.go:89] found id: ""
	I1217 02:09:26.169020 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.169028 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:26.169035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:26.169094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:26.210553 1498704 cri.go:89] found id: ""
	I1217 02:09:26.210581 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.210590 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:26.210597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:26.210659 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:26.236497 1498704 cri.go:89] found id: ""
	I1217 02:09:26.236526 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.236534 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:26.236541 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:26.236600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:26.261964 1498704 cri.go:89] found id: ""
	I1217 02:09:26.261989 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.261997 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:26.262004 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:26.262090 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:26.288105 1498704 cri.go:89] found id: ""
	I1217 02:09:26.288138 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.288148 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:26.288157 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:26.288168 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:26.343617 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:26.343650 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:26.358285 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:26.358312 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:26.424304 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:26.424327 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:26.424340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:26.450148 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:26.450185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:28.978571 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:28.990745 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:28.990835 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:29.015938 1498704 cri.go:89] found id: ""
	I1217 02:09:29.015962 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.015971 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:29.015977 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:29.016035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:29.041116 1498704 cri.go:89] found id: ""
	I1217 02:09:29.041141 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.041149 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:29.041156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:29.041217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:29.066014 1498704 cri.go:89] found id: ""
	I1217 02:09:29.066036 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.066044 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:29.066051 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:29.066107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:29.090514 1498704 cri.go:89] found id: ""
	I1217 02:09:29.090539 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.090548 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:29.090554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:29.090640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:29.114384 1498704 cri.go:89] found id: ""
	I1217 02:09:29.114405 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.114414 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:29.114420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:29.114506 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:29.143954 1498704 cri.go:89] found id: ""
	I1217 02:09:29.143977 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.143987 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:29.143995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:29.144081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:29.185816 1498704 cri.go:89] found id: ""
	I1217 02:09:29.185839 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.185847 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:29.185864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:29.185941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:29.214738 1498704 cri.go:89] found id: ""
	I1217 02:09:29.214761 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.214770 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:29.214780 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:29.214807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:29.244598 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:29.244623 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:29.300237 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:29.300271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.314809 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:29.314874 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:29.380612 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:29.380633 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:29.380645 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
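	The timestamps show the whole pass repeating roughly every three seconds (02:09:17, :20, :23, :26, :29, ...): the health check polls `sudo pgrep -xnf kube-apiserver.*minikube.*` on a short interval until a deadline expires. A minimal sketch of that poll-until-deadline shape; the interval and timeout below are chosen only for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe from the log: pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// No process yet: the real flow gathers diagnostics here, then retries.
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```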
	I1217 02:09:31.905779 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:31.917874 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:31.917963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:31.946726 1498704 cri.go:89] found id: ""
	I1217 02:09:31.946750 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.946759 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:31.946766 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:31.946829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:31.971653 1498704 cri.go:89] found id: ""
	I1217 02:09:31.971677 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.971685 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:31.971691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:31.971753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:31.999116 1498704 cri.go:89] found id: ""
	I1217 02:09:31.999139 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.999147 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:31.999160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:31.999224 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:32.028438 1498704 cri.go:89] found id: ""
	I1217 02:09:32.028461 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.028470 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:32.028476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:32.028535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:32.053600 1498704 cri.go:89] found id: ""
	I1217 02:09:32.053623 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.053632 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:32.053639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:32.053734 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:32.080000 1498704 cri.go:89] found id: ""
	I1217 02:09:32.080023 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.080032 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:32.080038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:32.080100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:32.105557 1498704 cri.go:89] found id: ""
	I1217 02:09:32.105632 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.105700 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:32.105721 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:32.105814 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:32.142478 1498704 cri.go:89] found id: ""
	I1217 02:09:32.142506 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.142515 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:32.142524 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:32.142536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:32.158591 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:32.158625 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:32.222822 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:32.222896 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:32.222917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:32.248192 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:32.248226 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:32.275127 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:32.275152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:34.830607 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:34.841178 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:34.841251 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:34.866230 1498704 cri.go:89] found id: ""
	I1217 02:09:34.866254 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.866263 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:34.866270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:34.866347 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:34.895167 1498704 cri.go:89] found id: ""
	I1217 02:09:34.895234 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.895251 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:34.895258 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:34.895317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:34.924481 1498704 cri.go:89] found id: ""
	I1217 02:09:34.924521 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.924530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:34.924537 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:34.924608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:34.953744 1498704 cri.go:89] found id: ""
	I1217 02:09:34.953814 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.953830 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:34.953837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:34.953910 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:34.978668 1498704 cri.go:89] found id: ""
	I1217 02:09:34.978735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.978755 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:34.978763 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:34.978823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:35.010506 1498704 cri.go:89] found id: ""
	I1217 02:09:35.010545 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.010554 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:35.010562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:35.010649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:35.037564 1498704 cri.go:89] found id: ""
	I1217 02:09:35.037591 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.037601 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:35.037607 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:35.037720 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:35.063033 1498704 cri.go:89] found id: ""
	I1217 02:09:35.063072 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.063093 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:35.063107 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:35.063123 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:35.119982 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:35.120059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:35.136426 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:35.136504 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:35.210581 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:35.210605 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:35.210617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:35.235901 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:35.235932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:37.769826 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:37.780267 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:37.780361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:37.804770 1498704 cri.go:89] found id: ""
	I1217 02:09:37.804835 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.804858 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:37.804876 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:37.804947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:37.828942 1498704 cri.go:89] found id: ""
	I1217 02:09:37.828981 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.829006 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:37.829019 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:37.829098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:37.856624 1498704 cri.go:89] found id: ""
	I1217 02:09:37.856689 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.856714 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:37.856733 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:37.856808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:37.895741 1498704 cri.go:89] found id: ""
	I1217 02:09:37.895779 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.895789 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:37.895796 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:37.895870 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:37.928762 1498704 cri.go:89] found id: ""
	I1217 02:09:37.928795 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.928804 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:37.928811 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:37.928889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:37.964505 1498704 cri.go:89] found id: ""
	I1217 02:09:37.964530 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.964540 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:37.964557 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:37.964622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:37.990281 1498704 cri.go:89] found id: ""
	I1217 02:09:37.990306 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.990315 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:37.990321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:37.990409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:38.022757 1498704 cri.go:89] found id: ""
	I1217 02:09:38.022789 1498704 logs.go:282] 0 containers: []
	W1217 02:09:38.022799 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:38.022819 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:38.022839 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:38.082781 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:38.082818 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:38.098274 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:38.098303 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:38.181369 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:38.181394 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:38.181408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:38.211421 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:38.211459 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
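	With no containers to inspect, the only useful evidence is unit logs, so each pass tails the last 400 lines of the kubelet and containerd journals plus filtered dmesg. A sketch of that gathering step, assuming systemd journals are available; the shell commands are copied from the log, the Go wrapper is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs the same log-collection commands seen in each pass above
// and returns their combined output keyed by source.
func gather() map[string]string {
	cmds := map[string][]string{
		"kubelet":    {"journalctl", "-u", "kubelet", "-n", "400"},
		"containerd": {"journalctl", "-u", "containerd", "-n", "400"},
		"dmesg":      {"sh", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	out := make(map[string]string)
	for name, args := range cmds {
		b, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gather() {
		fmt.Printf("== %s (%d bytes) ==\n", name, len(logs))
	}
}
```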
	I1217 02:09:40.744187 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:40.755584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:40.755657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:40.784265 1498704 cri.go:89] found id: ""
	I1217 02:09:40.784290 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.784299 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:40.784305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:40.784366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:40.812965 1498704 cri.go:89] found id: ""
	I1217 02:09:40.813034 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.813059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:40.813077 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:40.813170 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:40.838108 1498704 cri.go:89] found id: ""
	I1217 02:09:40.838135 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.838144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:40.838150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:40.838218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:40.863761 1498704 cri.go:89] found id: ""
	I1217 02:09:40.863797 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.863806 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:40.863814 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:40.863883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:40.896946 1498704 cri.go:89] found id: ""
	I1217 02:09:40.896973 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.896982 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:40.896990 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:40.897049 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:40.927040 1498704 cri.go:89] found id: ""
	I1217 02:09:40.927067 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.927076 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:40.927083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:40.927142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:40.953843 1498704 cri.go:89] found id: ""
	I1217 02:09:40.953869 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.953878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:40.953885 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:40.953947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:40.983898 1498704 cri.go:89] found id: ""
	I1217 02:09:40.983921 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.983929 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
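
Each cycle first looks for a live apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`, then probes crictl once per control-plane component with `crictl ps -a --quiet --name=<component>`; `--quiet` prints only container IDs, so empty output is read as "not found", which is why every component above reports 0 containers while the control plane is down. A standalone sketch of that probe loop (component list and helper are assumptions for illustration):

    // probe.go: sketch of the per-component probe the log repeats for
    // kube-apiserver, etcd, coredns, and the rest. It shells out to
    // crictl the way the logged commands do; it is not minikube code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func foundContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	// --quiet prints one container ID per line; no IDs means
    	// no container, running or exited, matched the name.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, c := range components {
    		ids, err := foundContainers(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }
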
	I1217 02:09:40.983938 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:40.983950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:41.041172 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:41.041208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
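
The log-gathering steps are plain shell one-liners: the last 400 lines of a systemd unit via `journalctl -u <unit> -n 400` (used for both kubelet and containerd), and warning-and-above kernel messages via `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`. A small sketch that reproduces the same gathering, assuming a systemd host with sudo (the wrapper function is illustrative):

    // gather.go: sketch of the journalctl/dmesg gathering above.
    // The shell commands are taken verbatim from the log; the Go
    // wrapper around them is an assumption for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(label, shellCmd string) {
    	fmt.Println("==>", label)
    	out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("gather failed:", err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
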
	I1217 02:09:41.056418 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:41.056454 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:41.119760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
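
The repeated "dial tcp [::1]:8443: connect: connection refused" is the key symptom: a refused connection is an immediate RST, meaning nothing is bound to the apiserver port at all (a hung apiserver would time out instead). A minimal sketch that reproduces the distinction on Linux:

    // dialcheck.go: distinguish "port closed" (connection refused)
    // from "unreachable/hung" (timeout) on the apiserver port. A
    // sketch, not part of the test suite.
    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	switch {
    	case err == nil:
    		conn.Close()
    		fmt.Println("something is listening on 8443")
    	case errors.Is(err, syscall.ECONNREFUSED):
    		// This is the case the log keeps hitting: no process
    		// is bound to 8443, so the kernel refuses outright.
    		fmt.Println("port closed: no apiserver bound to 8443")
    	default:
    		fmt.Println("other failure (timeout, DNS, ...):", err)
    	}
    }
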
	I1217 02:09:41.119832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:41.119859 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:41.148272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:41.148479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.682654 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:43.694991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:43.695064 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:43.722566 1498704 cri.go:89] found id: ""
	I1217 02:09:43.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.722599 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:43.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:43.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:43.747132 1498704 cri.go:89] found id: ""
	I1217 02:09:43.747157 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.747165 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:43.747177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:43.747238 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:43.773465 1498704 cri.go:89] found id: ""
	I1217 02:09:43.773486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.773494 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:43.773500 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:43.773559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:43.798692 1498704 cri.go:89] found id: ""
	I1217 02:09:43.798716 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.798725 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:43.798731 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:43.798796 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:43.825731 1498704 cri.go:89] found id: ""
	I1217 02:09:43.825753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.825762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:43.825768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:43.825827 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:43.855796 1498704 cri.go:89] found id: ""
	I1217 02:09:43.855821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.855829 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:43.855836 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:43.855902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:43.886935 1498704 cri.go:89] found id: ""
	I1217 02:09:43.886960 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.886969 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:43.886975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:43.887035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:43.917934 1498704 cri.go:89] found id: ""
	I1217 02:09:43.917961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.917970 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:43.917979 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:43.917997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.947632 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:43.947659 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:44.003825 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:44.003866 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:44.019941 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:44.019972 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:44.089358 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:44.081196    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.081940    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.083656    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.084150    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.085419    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:44.081196    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.081940    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.083656    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.084150    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.085419    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:44.089380 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:44.089394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:46.615402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:46.625887 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:46.625979 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:46.650868 1498704 cri.go:89] found id: ""
	I1217 02:09:46.650891 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.650899 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:46.650906 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:46.650966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:46.675004 1498704 cri.go:89] found id: ""
	I1217 02:09:46.675025 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.675033 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:46.675039 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:46.675098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:46.698859 1498704 cri.go:89] found id: ""
	I1217 02:09:46.698880 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.698888 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:46.698899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:46.698966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:46.722103 1498704 cri.go:89] found id: ""
	I1217 02:09:46.722130 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.722139 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:46.722146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:46.722205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:46.749559 1498704 cri.go:89] found id: ""
	I1217 02:09:46.749582 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.749591 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:46.749598 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:46.749681 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:46.775252 1498704 cri.go:89] found id: ""
	I1217 02:09:46.775274 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.775282 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:46.775289 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:46.775368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:46.799706 1498704 cri.go:89] found id: ""
	I1217 02:09:46.799738 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.799747 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:46.799754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:46.799815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:46.825525 1498704 cri.go:89] found id: ""
	I1217 02:09:46.825552 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.825562 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:46.825596 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:46.825616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:46.898518 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:46.889823    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.890505    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892089    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892616    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.894554    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:46.889823    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.890505    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892089    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892616    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.894554    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:46.898546 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:46.898559 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:46.924328 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:46.924360 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:46.953287 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:46.953315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:47.008776 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:47.008811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.524226 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:49.535609 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:49.535691 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:49.563709 1498704 cri.go:89] found id: ""
	I1217 02:09:49.563735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.563744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:49.563751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:49.563829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:49.589205 1498704 cri.go:89] found id: ""
	I1217 02:09:49.589229 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.589238 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:49.589245 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:49.589305 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:49.615016 1498704 cri.go:89] found id: ""
	I1217 02:09:49.615038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.615046 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:49.615053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:49.615110 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:49.639299 1498704 cri.go:89] found id: ""
	I1217 02:09:49.639377 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.639407 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:49.639416 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:49.639514 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:49.664056 1498704 cri.go:89] found id: ""
	I1217 02:09:49.664079 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.664087 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:49.664093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:49.664151 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:49.688630 1498704 cri.go:89] found id: ""
	I1217 02:09:49.688652 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.688661 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:49.688667 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:49.688724 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:49.712428 1498704 cri.go:89] found id: ""
	I1217 02:09:49.712447 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.712461 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:49.712467 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:49.712525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:49.736311 1498704 cri.go:89] found id: ""
	I1217 02:09:49.736388 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.736412 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:49.736433 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:49.736473 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:49.792224 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:49.792264 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.806602 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:49.806639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.873760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:49.873781 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:49.873793 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:49.901849 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:49.901881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.452856 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:52.463628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:52.463707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:52.487769 1498704 cri.go:89] found id: ""
	I1217 02:09:52.487794 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.487802 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:52.487809 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:52.487901 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:52.515989 1498704 cri.go:89] found id: ""
	I1217 02:09:52.516013 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.516022 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:52.516028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:52.516136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:52.542514 1498704 cri.go:89] found id: ""
	I1217 02:09:52.542538 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.542547 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:52.542554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:52.542622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:52.567016 1498704 cri.go:89] found id: ""
	I1217 02:09:52.567050 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.567059 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:52.567067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:52.567129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:52.591935 1498704 cri.go:89] found id: ""
	I1217 02:09:52.591961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.591969 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:52.591975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:52.592035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:52.617548 1498704 cri.go:89] found id: ""
	I1217 02:09:52.617573 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.617583 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:52.617589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:52.617668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:52.642857 1498704 cri.go:89] found id: ""
	I1217 02:09:52.642881 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.642889 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:52.642895 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:52.642952 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:52.666997 1498704 cri.go:89] found id: ""
	I1217 02:09:52.667022 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.667031 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:52.667042 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:52.667055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.736175 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:52.736198 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:52.736210 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:52.761310 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.761340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.789730 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:52.789758 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:52.846428 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:52.846464 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.363216 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:55.378169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:55.378242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:55.405237 1498704 cri.go:89] found id: ""
	I1217 02:09:55.405262 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.405271 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:55.405277 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:55.405341 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:55.431829 1498704 cri.go:89] found id: ""
	I1217 02:09:55.431852 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.431860 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:55.431866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:55.431924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:55.464126 1498704 cri.go:89] found id: ""
	I1217 02:09:55.464149 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.464157 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:55.464163 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:55.464221 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:55.489098 1498704 cri.go:89] found id: ""
	I1217 02:09:55.489140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.489174 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:55.489188 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:55.489291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:55.514718 1498704 cri.go:89] found id: ""
	I1217 02:09:55.514753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.514762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:55.514768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:55.514828 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:55.538941 1498704 cri.go:89] found id: ""
	I1217 02:09:55.538964 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.538972 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:55.538979 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:55.539040 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:55.564206 1498704 cri.go:89] found id: ""
	I1217 02:09:55.564233 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.564242 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:55.564248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:55.564307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:55.588698 1498704 cri.go:89] found id: ""
	I1217 02:09:55.588722 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.588731 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:55.588740 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:55.588751 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.643314 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.643346 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.657901 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.657933 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.728753 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.720443   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.721112   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722240   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722829   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.724553   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:55.720443   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.721112   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722240   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722829   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.724553   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:55.728775 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:55.728788 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:55.754781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.754822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:58.282279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:58.292524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:58.292594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:58.320120 1498704 cri.go:89] found id: ""
	I1217 02:09:58.320144 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.320153 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:58.320160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:58.320219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:58.344609 1498704 cri.go:89] found id: ""
	I1217 02:09:58.344634 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.344643 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:58.344649 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:58.344714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:58.371166 1498704 cri.go:89] found id: ""
	I1217 02:09:58.371194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.371203 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:58.371209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:58.371267 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:58.399919 1498704 cri.go:89] found id: ""
	I1217 02:09:58.399947 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.399955 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:58.399961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:58.400029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:58.426746 1498704 cri.go:89] found id: ""
	I1217 02:09:58.426774 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.426783 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:58.426789 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:58.426849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:58.452086 1498704 cri.go:89] found id: ""
	I1217 02:09:58.452164 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.452187 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:58.452202 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:58.452313 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:58.479597 1498704 cri.go:89] found id: ""
	I1217 02:09:58.479640 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.479650 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:58.479657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:58.479735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:58.507631 1498704 cri.go:89] found id: ""
	I1217 02:09:58.507660 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.507668 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.507677 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.507688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.563330 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.563364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.577956 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.577986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.640599 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.632937   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.633485   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.634953   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.635364   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.636788   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:58.632937   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.633485   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.634953   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.635364   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.636788   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:58.640618 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:58.640631 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:58.665542 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.665579 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.193230 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:01.205093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:01.205168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:01.231574 1498704 cri.go:89] found id: ""
	I1217 02:10:01.231657 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.231671 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:01.231679 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:01.231755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:01.258626 1498704 cri.go:89] found id: ""
	I1217 02:10:01.258656 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.258665 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:01.258671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:01.258731 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:01.285028 1498704 cri.go:89] found id: ""
	I1217 02:10:01.285107 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.285130 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:01.285150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:01.285236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:01.311238 1498704 cri.go:89] found id: ""
	I1217 02:10:01.311260 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.311270 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:01.311276 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:01.311337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:01.335915 1498704 cri.go:89] found id: ""
	I1217 02:10:01.335938 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.335946 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.335953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:01.336013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:01.362270 1498704 cri.go:89] found id: ""
	I1217 02:10:01.362299 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.362310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.362317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:01.362386 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:01.389194 1498704 cri.go:89] found id: ""
	I1217 02:10:01.389272 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.389296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.389315 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:01.389404 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:01.425060 1498704 cri.go:89] found id: ""
	I1217 02:10:01.425133 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.425156 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.425178 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.425214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.484970 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.485005 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.500061 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.500089 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.568584 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.560770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.561180   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.562770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.563222   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.564705   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
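
Every retry in this section fails the same way: kubectl cannot open a TCP connection to the API server endpoint named in /var/lib/minikube/kubeconfig, because nothing is listening on localhost:8443. A minimal standalone probe of that endpoint (the port comes from the errors above; this is a sketch, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the endpoint kubectl keeps failing against in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver container running, this prints the same
			// "connect: connection refused" seen in the kubectl stderr.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
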
	I1217 02:10:01.568606 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:01.568618 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:01.594966 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.595000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
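
The block above is one iteration of a poll that repeats roughly every three seconds for the remainder of this test: look for a running kube-apiserver process, ask crictl for a container matching each control-plane component, and gather diagnostics when nothing turns up. A rough reconstruction of that loop, assuming sudo and crictl are available on the node (the helper and its names are illustrative, not minikube's internals):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// componentExists mirrors "sudo crictl ps -a --quiet --name=<component>":
	// a non-empty result means at least one matching container exists.
	func componentExists(name string) bool {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for attempt := 0; attempt < 10; attempt++ {
			// The log's "sudo pgrep -xnf kube-apiserver.*minikube.*" checks for
			// the apiserver process itself before listing containers.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			for _, c := range components {
				if !componentExists(c) {
					fmt.Printf("no container found matching %q\n", c)
				}
			}
			time.Sleep(3 * time.Second) // matches the ~3s spacing of the timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
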
	I1217 02:10:04.124707 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:04.138794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:04.138889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:04.192615 1498704 cri.go:89] found id: ""
	I1217 02:10:04.192646 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.192657 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:04.192664 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:04.192738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:04.223099 1498704 cri.go:89] found id: ""
	I1217 02:10:04.223126 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.223135 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.223142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:04.223204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:04.251428 1498704 cri.go:89] found id: ""
	I1217 02:10:04.251451 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.251460 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.251466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:04.251549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:04.277739 1498704 cri.go:89] found id: ""
	I1217 02:10:04.277767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.277778 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.277786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:04.277849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:04.302600 1498704 cri.go:89] found id: ""
	I1217 02:10:04.302625 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.302633 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.302639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:04.302702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:04.328192 1498704 cri.go:89] found id: ""
	I1217 02:10:04.328221 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.328230 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.328237 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:04.328307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:04.354026 1498704 cri.go:89] found id: ""
	I1217 02:10:04.354049 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.354058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.354064 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:04.354125 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:04.387067 1498704 cri.go:89] found id: ""
	I1217 02:10:04.387101 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.387111 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.387140 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:04.387159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:04.420944 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.420981 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.453477 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.453511 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.509779 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.509814 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.525121 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.525151 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.596992 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.588312   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.589011   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590255   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590954   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.592734   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.097279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.107872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:07.107951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:07.140845 1498704 cri.go:89] found id: ""
	I1217 02:10:07.140873 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.140883 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.140889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:07.140949 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:07.171271 1498704 cri.go:89] found id: ""
	I1217 02:10:07.171293 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.171301 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.171307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:07.171368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:07.199048 1498704 cri.go:89] found id: ""
	I1217 02:10:07.199075 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.199085 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.199092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:07.199152 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:07.223715 1498704 cri.go:89] found id: ""
	I1217 02:10:07.223755 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.223765 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.223771 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:07.223838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:07.250683 1498704 cri.go:89] found id: ""
	I1217 02:10:07.250708 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.250718 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.250724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:07.250783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:07.274541 1498704 cri.go:89] found id: ""
	I1217 02:10:07.274614 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.274627 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.274661 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:07.274752 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:07.298768 1498704 cri.go:89] found id: ""
	I1217 02:10:07.298833 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.298859 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.298872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:07.298944 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:07.322447 1498704 cri.go:89] found id: ""
	I1217 02:10:07.322510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.322534 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.322549 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.322561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.392049 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.383394   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.384747   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386434   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386720   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.388152   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.392072 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:07.392086 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:07.419785 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.419819 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.448497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.448525 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.505149 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.505186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
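
Each failed probe gathers the same five log sources (kubelet, dmesg, describe nodes, containerd, container status), but in a different order on each attempt, and only "describe nodes" ever fails, because the other four read the node directly instead of going through the API server. The changing order is consistent with the gather targets being stored in a Go map, whose iteration order is randomized per run; a toy illustration (the map contents are inferred from the log, not taken from minikube's source):

	package main

	import "fmt"

	func main() {
		gathers := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg | tail -n 400",
			"describe nodes":   "kubectl describe nodes",
			"containerd":       "journalctl -u containerd -n 400",
			"container status": "crictl ps -a",
		}
		// Go randomizes map iteration order, so successive runs visit the five
		// gathers in different orders -- the same rotation visible above.
		for name := range gathers {
			fmt.Println("Gathering logs for", name, "...")
		}
	}
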
	I1217 02:10:10.022238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.034403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:10.034482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:10.061856 1498704 cri.go:89] found id: ""
	I1217 02:10:10.061882 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.061891 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.061897 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:10.061976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:10.089092 1498704 cri.go:89] found id: ""
	I1217 02:10:10.089118 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.089128 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.089141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:10.089217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:10.115444 1498704 cri.go:89] found id: ""
	I1217 02:10:10.115467 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.115476 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.115482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:10.115579 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:10.142860 1498704 cri.go:89] found id: ""
	I1217 02:10:10.142889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.142897 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.142904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:10.142975 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:10.171034 1498704 cri.go:89] found id: ""
	I1217 02:10:10.171061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.171070 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.171076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:10.171135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:10.201087 1498704 cri.go:89] found id: ""
	I1217 02:10:10.201121 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.201130 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.201137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:10.201206 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:10.227252 1498704 cri.go:89] found id: ""
	I1217 02:10:10.227316 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.227340 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.227353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:10.227429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:10.256814 1498704 cri.go:89] found id: ""
	I1217 02:10:10.256850 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.256859 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.256885 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.256905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.316432 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.316484 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.331782 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.331807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.418862 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.410069   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.410617   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.412164   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.413026   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.414651   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.418886 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:10.418898 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:10.447108 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.447142 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:12.978148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.988751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:12.988821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:13.014409 1498704 cri.go:89] found id: ""
	I1217 02:10:13.014435 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.014445 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.014452 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:13.014516 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:13.039697 1498704 cri.go:89] found id: ""
	I1217 02:10:13.039725 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.039734 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.039741 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:13.039830 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:13.063238 1498704 cri.go:89] found id: ""
	I1217 02:10:13.063263 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.063272 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.063279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:13.063337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:13.087932 1498704 cri.go:89] found id: ""
	I1217 02:10:13.087955 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.087964 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.087970 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:13.088029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:13.116779 1498704 cri.go:89] found id: ""
	I1217 02:10:13.116824 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.116833 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.116840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:13.116924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:13.152355 1498704 cri.go:89] found id: ""
	I1217 02:10:13.152379 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.152388 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.152395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:13.152462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:13.178465 1498704 cri.go:89] found id: ""
	I1217 02:10:13.178498 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.178507 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.178513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:13.178597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:13.204065 1498704 cri.go:89] found id: ""
	I1217 02:10:13.204090 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.204099 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.204109 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.204119 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:13.260597 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.260643 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.275806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.275834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.339094 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.330634   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.331065   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.332876   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.333564   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.335042   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:13.339116 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:13.339128 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:13.364711 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.364742 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:15.901294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:15.915207 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:15.915287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:15.944035 1498704 cri.go:89] found id: ""
	I1217 02:10:15.944062 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.944071 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:15.944078 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:15.944142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:15.969105 1498704 cri.go:89] found id: ""
	I1217 02:10:15.969132 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.969142 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:15.969148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:15.969213 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:15.994468 1498704 cri.go:89] found id: ""
	I1217 02:10:15.994495 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.994505 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:15.994511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:15.994576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:16.021869 1498704 cri.go:89] found id: ""
	I1217 02:10:16.021897 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.021907 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.021914 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:16.021981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:16.050208 1498704 cri.go:89] found id: ""
	I1217 02:10:16.050236 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.050245 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.050252 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:16.050319 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:16.076004 1498704 cri.go:89] found id: ""
	I1217 02:10:16.076031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.076041 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.076048 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:16.076159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:16.102446 1498704 cri.go:89] found id: ""
	I1217 02:10:16.102526 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.102550 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.102563 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:16.102643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:16.134280 1498704 cri.go:89] found id: ""
	I1217 02:10:16.134306 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.134315 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.134325 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.134362 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:16.173187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.173220 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.231927 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.231960 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.247063 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.247093 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.315647 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.307649   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.308739   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.309576   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.310605   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.311801   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.315668 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:16.315681 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:18.841379 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:18.852146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:18.852219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:18.877675 1498704 cri.go:89] found id: ""
	I1217 02:10:18.877750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.877765 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:18.877773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:18.877839 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:18.903447 1498704 cri.go:89] found id: ""
	I1217 02:10:18.903482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.903491 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:18.903498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:18.903576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:18.929561 1498704 cri.go:89] found id: ""
	I1217 02:10:18.929588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.929597 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:18.929604 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:18.929683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:18.955239 1498704 cri.go:89] found id: ""
	I1217 02:10:18.955333 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.955350 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:18.955358 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:18.955424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:18.979922 1498704 cri.go:89] found id: ""
	I1217 02:10:18.979953 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.979962 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:18.979968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:18.980035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:19.007041 1498704 cri.go:89] found id: ""
	I1217 02:10:19.007077 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.007087 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.007093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:19.007177 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:19.035426 1498704 cri.go:89] found id: ""
	I1217 02:10:19.035450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.035459 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.035466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:19.035542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:19.060135 1498704 cri.go:89] found id: ""
	I1217 02:10:19.060159 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.060167 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.060200 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.060217 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.116693 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.116728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.134579 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.134610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.216066 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.207558   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.208046   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.209922   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.210470   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.212114   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.216089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:19.216105 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:19.242169 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.242202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:21.771406 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:21.782951 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:21.783026 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:21.809728 1498704 cri.go:89] found id: ""
	I1217 02:10:21.809750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.809758 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:21.809765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:21.809824 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:21.841207 1498704 cri.go:89] found id: ""
	I1217 02:10:21.841233 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.841242 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:21.841248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:21.841307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:21.868982 1498704 cri.go:89] found id: ""
	I1217 02:10:21.869008 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.869017 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:21.869023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:21.869102 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:21.895994 1498704 cri.go:89] found id: ""
	I1217 02:10:21.896030 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.896040 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:21.896046 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:21.896117 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:21.927675 1498704 cri.go:89] found id: ""
	I1217 02:10:21.927767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.927786 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:21.927798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:21.927886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:21.956133 1498704 cri.go:89] found id: ""
	I1217 02:10:21.956157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.956166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:21.956172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:21.956235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:21.987411 1498704 cri.go:89] found id: ""
	I1217 02:10:21.987442 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.987451 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:21.987458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:21.987528 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:22.018001 1498704 cri.go:89] found id: ""
	I1217 02:10:22.018031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:22.018041 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.018058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.018072 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.077509 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.077544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:22.094048 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.094152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.179483 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.170164   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.171129   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.172667   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.173275   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.174996   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.179527 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:22.179552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:22.208002 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.208053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
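
By 02:10:25 the pattern still has not changed: every crictl listing comes back empty and every kubectl call is refused, which points at the control-plane containers never being created rather than at kubectl itself. On such a node, the two sketches earlier in this section would show exactly these two signals: an empty container list and a refused TCP dial to localhost:8443.
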
	I1217 02:10:24.745839 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:24.756980 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:24.757073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:24.781924 1498704 cri.go:89] found id: ""
	I1217 02:10:24.781947 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.781955 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:24.781962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:24.782022 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:24.807686 1498704 cri.go:89] found id: ""
	I1217 02:10:24.807709 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.807718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:24.807725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:24.807785 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:24.833146 1498704 cri.go:89] found id: ""
	I1217 02:10:24.833177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.833197 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:24.833204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:24.833268 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:24.859474 1498704 cri.go:89] found id: ""
	I1217 02:10:24.859496 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.859505 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:24.859523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:24.859585 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:24.885498 1498704 cri.go:89] found id: ""
	I1217 02:10:24.885523 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.885532 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:24.885549 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:24.885608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:24.910357 1498704 cri.go:89] found id: ""
	I1217 02:10:24.910394 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.910403 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:24.910410 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:24.910487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:24.935548 1498704 cri.go:89] found id: ""
	I1217 02:10:24.935572 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.935581 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:24.935588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:24.935650 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:24.961748 1498704 cri.go:89] found id: ""
	I1217 02:10:24.961774 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.961813 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:24.961831 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:24.961852 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.989413 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:24.989488 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.046752 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.046797 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.074232 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.074268 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.166951 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.152840   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.157975   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.158869   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.160827   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.161145   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:25.152840   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.157975   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.158869   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.160827   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.161145   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
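Each scan cycle above issues one crictl query per control-plane component and records an empty ID list ('found id: ""'). The whole cycle reduces to a short loop; a sketch using only the crictl invocation visible in the log, run inside the node:

    # Re-runs minikube's per-component container scan by hand.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done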
	I1217 02:10:25.166980 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:25.166994 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:27.699737 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:27.710317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:27.710401 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:27.735667 1498704 cri.go:89] found id: ""
	I1217 02:10:27.735694 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.735703 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:27.735709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:27.735770 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:27.764035 1498704 cri.go:89] found id: ""
	I1217 02:10:27.764061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.764070 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:27.764076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:27.764136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:27.788237 1498704 cri.go:89] found id: ""
	I1217 02:10:27.788265 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.788273 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:27.788280 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:27.788340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:27.815686 1498704 cri.go:89] found id: ""
	I1217 02:10:27.815714 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.815723 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:27.815730 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:27.815792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:27.846482 1498704 cri.go:89] found id: ""
	I1217 02:10:27.846510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.846518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:27.846525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:27.846584 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:27.871189 1498704 cri.go:89] found id: ""
	I1217 02:10:27.871217 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.871227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:27.871233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:27.871292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:27.899034 1498704 cri.go:89] found id: ""
	I1217 02:10:27.899056 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.899064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:27.899070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:27.899128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:27.923014 1498704 cri.go:89] found id: ""
	I1217 02:10:27.923037 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.923046 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:27.923055 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:27.923066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:27.948254 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:27.948289 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:27.978557 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:27.978582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.033709 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.033748 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:28.049287 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:28.049315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:28.120598 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:28.111016   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.111430   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113055   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113399   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.114622   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:28.111016   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.111430   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113055   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113399   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.114622   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
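The kubelet and containerd entries under "Gathering logs" are plain journalctl reads capped at the last 400 entries per unit. Run by hand inside the node they are, with --no-pager added here for interactive use:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager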
	I1217 02:10:30.621228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:30.633415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:30.633544 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:30.660114 1498704 cri.go:89] found id: ""
	I1217 02:10:30.660186 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.660208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:30.660228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:30.660315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:30.687423 1498704 cri.go:89] found id: ""
	I1217 02:10:30.687450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.687459 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:30.687466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:30.687542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:30.712536 1498704 cri.go:89] found id: ""
	I1217 02:10:30.712568 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.712577 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:30.712584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:30.712658 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:30.736913 1498704 cri.go:89] found id: ""
	I1217 02:10:30.736983 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.737007 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:30.737025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:30.737115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:30.761778 1498704 cri.go:89] found id: ""
	I1217 02:10:30.761852 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.761875 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:30.761889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:30.761963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:30.789829 1498704 cri.go:89] found id: ""
	I1217 02:10:30.789854 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.789863 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:30.789869 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:30.789930 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:30.815268 1498704 cri.go:89] found id: ""
	I1217 02:10:30.815296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.815304 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:30.815311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:30.815373 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:30.839769 1498704 cri.go:89] found id: ""
	I1217 02:10:30.839793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.839802 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:30.839811 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:30.839823 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:30.854187 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:30.854216 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:30.917680 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:30.908973   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.909688   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911279   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911863   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.913482   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:30.908973   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.909688   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911279   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911863   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.913482   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
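The dmesg invocation repeated in every cycle combines several util-linux flags; the breakdown below is from the util-linux man page as remembered and is worth verifying against the installed version:

    # -P  no pager    -H  human-readable timestamps    -L=never  no color codes
    # --level ...     keep only the listed priorities
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400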
	I1217 02:10:30.917706 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:30.917718 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:30.943267 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:30.943300 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:30.970294 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:30.970374 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.525981 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:33.536356 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:33.536427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:33.561187 1498704 cri.go:89] found id: ""
	I1217 02:10:33.561210 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.561219 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:33.561225 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:33.561287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:33.589979 1498704 cri.go:89] found id: ""
	I1217 02:10:33.590002 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.590012 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:33.590023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:33.590082 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:33.615543 1498704 cri.go:89] found id: ""
	I1217 02:10:33.615567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.615576 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:33.615583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:33.615644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:33.648052 1498704 cri.go:89] found id: ""
	I1217 02:10:33.648080 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.648089 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:33.648095 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:33.648162 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:33.676343 1498704 cri.go:89] found id: ""
	I1217 02:10:33.676376 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.676386 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:33.676392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:33.676459 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:33.707262 1498704 cri.go:89] found id: ""
	I1217 02:10:33.707338 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.707353 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:33.707359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:33.707419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:33.732853 1498704 cri.go:89] found id: ""
	I1217 02:10:33.732920 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.732945 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:33.732963 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:33.733053 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:33.757542 1498704 cri.go:89] found id: ""
	I1217 02:10:33.757567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.757576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:33.757585 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:33.757596 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:33.821758 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:33.813865   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.814366   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.815953   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.816345   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.817904   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:33.813865   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.814366   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.815953   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.816345   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.817904   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
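The failing describe-nodes probe can be reproduced verbatim inside the node; both the versioned kubectl binary path and the kubeconfig path are the ones printed in the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig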
	I1217 02:10:33.821777 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:33.821791 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:33.846519 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:33.846555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:33.873755 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:33.873782 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.930246 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:33.930282 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:36.445766 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:36.456503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:36.456576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:36.483872 1498704 cri.go:89] found id: ""
	I1217 02:10:36.483894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.483903 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:36.483909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:36.483970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:36.508742 1498704 cri.go:89] found id: ""
	I1217 02:10:36.508765 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.508774 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:36.508780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:36.508838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:36.535472 1498704 cri.go:89] found id: ""
	I1217 02:10:36.535511 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.535520 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:36.535527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:36.535591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:36.566274 1498704 cri.go:89] found id: ""
	I1217 02:10:36.566296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.566305 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:36.566311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:36.566372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:36.590882 1498704 cri.go:89] found id: ""
	I1217 02:10:36.590904 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.590912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:36.590918 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:36.590977 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:36.614768 1498704 cri.go:89] found id: ""
	I1217 02:10:36.614793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.614802 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:36.614808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:36.614889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:36.643752 1498704 cri.go:89] found id: ""
	I1217 02:10:36.643778 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.643787 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:36.643794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:36.643857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:36.672151 1498704 cri.go:89] found id: ""
	I1217 02:10:36.672177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.672186 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:36.672194 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:36.672208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:36.733511 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:36.733544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:36.752180 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:36.752255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:36.815443 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:36.807321   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.807927   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.809664   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.810137   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.811712   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:36.807321   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.807927   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.809664   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.810137   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.811712   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
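The pgrep probe that opens every cycle combines three flags: -f matches against the full command line rather than just the process name, -x requires the whole line to match the pattern, and -n keeps only the newest match:

    # Exits nonzero here because no apiserver process exists,
    # which is what keeps the retry cycle going.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'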
	I1217 02:10:36.815465 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:36.815478 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:36.840305 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:36.840349 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:39.373770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:39.386294 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:39.386380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:39.420073 1498704 cri.go:89] found id: ""
	I1217 02:10:39.420117 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.420126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:39.420132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:39.420210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:39.454303 1498704 cri.go:89] found id: ""
	I1217 02:10:39.454327 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.454338 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:39.454344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:39.454402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:39.483117 1498704 cri.go:89] found id: ""
	I1217 02:10:39.483143 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.483152 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:39.483159 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:39.483236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:39.507851 1498704 cri.go:89] found id: ""
	I1217 02:10:39.507927 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.507942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:39.507949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:39.508011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:39.535318 1498704 cri.go:89] found id: ""
	I1217 02:10:39.535344 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.535353 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:39.535359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:39.535460 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:39.559510 1498704 cri.go:89] found id: ""
	I1217 02:10:39.559587 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.559602 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:39.559610 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:39.559670 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:39.588446 1498704 cri.go:89] found id: ""
	I1217 02:10:39.588477 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.588487 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:39.588493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:39.588597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:39.616016 1498704 cri.go:89] found id: ""
	I1217 02:10:39.616041 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.616049 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:39.616058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:39.616069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:39.678516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:39.678553 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:39.698413 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:39.698440 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:39.766310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:39.757858   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.758625   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760117   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760571   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.762054   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:39.757858   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.758625   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760117   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760571   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.762054   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
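The timestamps show the probe repeating roughly every three seconds. Minikube's real loop lives in its Go code, but the observable behaviour matches a simple poll; a hypothetical shell equivalent, with an assumed 20-attempt cap:

    # Sketch of the wait-for-apiserver poll seen in the timestamps above.
    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done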
	I1217 02:10:39.766333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:39.766347 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:39.791602 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:39.791641 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:42.319919 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:42.330880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:42.330962 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:42.355776 1498704 cri.go:89] found id: ""
	I1217 02:10:42.355798 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.355807 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:42.355813 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:42.355872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:42.393050 1498704 cri.go:89] found id: ""
	I1217 02:10:42.393084 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.393093 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:42.393100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:42.393159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:42.426120 1498704 cri.go:89] found id: ""
	I1217 02:10:42.426157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.426166 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:42.426174 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:42.426245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:42.456881 1498704 cri.go:89] found id: ""
	I1217 02:10:42.456917 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.456926 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:42.456932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:42.456999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:42.481272 1498704 cri.go:89] found id: ""
	I1217 02:10:42.481298 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.481307 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:42.481312 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:42.481372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:42.506468 1498704 cri.go:89] found id: ""
	I1217 02:10:42.506497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.506506 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:42.506512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:42.506572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:42.531395 1498704 cri.go:89] found id: ""
	I1217 02:10:42.531460 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.531476 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:42.531484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:42.531552 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:42.555791 1498704 cri.go:89] found id: ""
	I1217 02:10:42.555814 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.555822 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:42.555831 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:42.555843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:42.611764 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:42.611800 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:42.627436 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:42.627463 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:42.717562 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
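The "container status" command relies on a shell fallback chain: if crictl is not installed, `which crictl` prints nothing, `echo crictl` substitutes the bare word, that command then fails, and `|| sudo docker ps -a` takes over. The same fallback in $() form, behaving identically to the backtick version in the log:

    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a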
	I1217 02:10:42.717584 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:42.717597 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:42.742727 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:42.742763 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:45.269723 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:45.281660 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:45.281736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:45.307916 1498704 cri.go:89] found id: ""
	I1217 02:10:45.307941 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.307950 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:45.307956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:45.308021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:45.337837 1498704 cri.go:89] found id: ""
	I1217 02:10:45.337862 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.337871 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:45.337878 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:45.337943 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:45.382867 1498704 cri.go:89] found id: ""
	I1217 02:10:45.382894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.382903 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:45.382909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:45.382970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:45.424600 1498704 cri.go:89] found id: ""
	I1217 02:10:45.424629 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.424637 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:45.424644 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:45.424707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:45.456469 1498704 cri.go:89] found id: ""
	I1217 02:10:45.456497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.456505 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:45.456511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:45.456574 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:45.482345 1498704 cri.go:89] found id: ""
	I1217 02:10:45.482370 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.482378 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:45.482385 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:45.482450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:45.507901 1498704 cri.go:89] found id: ""
	I1217 02:10:45.507930 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.507948 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:45.507955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:45.508065 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:45.532875 1498704 cri.go:89] found id: ""
	I1217 02:10:45.532896 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.532904 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:45.532913 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:45.532924 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:45.589239 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:45.589273 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:45.604011 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:45.604045 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:45.695710 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:45.686715   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.687431   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689161   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689946   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.691789   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:45.695788 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:45.695808 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:45.721274 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:45.721310 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
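The block above is one full pass of minikube's apiserver wait loop: a pgrep probe for the kube-apiserver process, a crictl sweep over the expected control-plane containers, and a round of log gathering when everything comes back empty. A minimal sketch for re-running the probe by hand from the host (NODE is a stand-in for the profile's container name, visible in `docker ps`):

    NODE=functional-608344   # assumed name; substitute your profile's container
    docker exec "$NODE" sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
        || echo "kube-apiserver process not running"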
	I1217 02:10:48.251294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:48.261750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.261825 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.286414 1498704 cri.go:89] found id: ""
	I1217 02:10:48.286441 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.286450 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.286457 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.286515 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.315314 1498704 cri.go:89] found id: ""
	I1217 02:10:48.315336 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.315344 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.315351 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.315411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.339435 1498704 cri.go:89] found id: ""
	I1217 02:10:48.339461 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.339469 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.339476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.339543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.363969 1498704 cri.go:89] found id: ""
	I1217 02:10:48.364045 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.364061 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.364069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.364134 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.391387 1498704 cri.go:89] found id: ""
	I1217 02:10:48.391409 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.391418 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.391425 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.391489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.422985 1498704 cri.go:89] found id: ""
	I1217 02:10:48.423006 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.423014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.423021 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.423081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.451561 1498704 cri.go:89] found id: ""
	I1217 02:10:48.451588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.451598 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.451605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:48.451667 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:48.477573 1498704 cri.go:89] found id: ""
	I1217 02:10:48.477597 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.477607 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:48.477616 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:48.477627 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:48.503190 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:48.503227 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.531901 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:48.531927 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:48.590637 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:48.590670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:48.606410 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:48.606441 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:48.698001 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:48.689453   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.690595   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692088   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692610   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.694141   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
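Every describe-nodes failure in this section has the same shape: the version-pinned kubectl reads /var/lib/minikube/kubeconfig, dials https://localhost:8443, and gets connection refused on [::1]:8443, meaning nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container. A sketch for confirming that from inside the node (assumes ss from iproute2 is available in the image):

    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"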
	I1217 02:10:51.198775 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:51.210128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:51.210207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:51.239455 1498704 cri.go:89] found id: ""
	I1217 02:10:51.239482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.239491 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:51.239504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:51.239587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:51.265468 1498704 cri.go:89] found id: ""
	I1217 02:10:51.265541 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.265565 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:51.265583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:51.265684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:51.290269 1498704 cri.go:89] found id: ""
	I1217 02:10:51.290294 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.290303 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:51.290310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:51.290403 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:51.315672 1498704 cri.go:89] found id: ""
	I1217 02:10:51.315697 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.315706 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:51.315712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:51.315775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:51.345852 1498704 cri.go:89] found id: ""
	I1217 02:10:51.345922 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.345938 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:51.345945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:51.346021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:51.374855 1498704 cri.go:89] found id: ""
	I1217 02:10:51.374884 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.374892 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:51.374899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:51.374967 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:51.408516 1498704 cri.go:89] found id: ""
	I1217 02:10:51.408553 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.408563 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:51.408569 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:51.408636 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:51.443401 1498704 cri.go:89] found id: ""
	I1217 02:10:51.443428 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.443436 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:51.443445 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:51.443474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:51.499872 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:51.499907 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:51.514690 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:51.514759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:51.581421 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:51.573065   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.573700   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.575403   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.576080   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.577582   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:51.581455 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:51.581470 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:51.606921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:51.606964 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
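The container-status step relies on a shell fallback: `which crictl || echo crictl` keeps the literal name crictl even when `which` fails (so the error still names the missing tool), and the trailing `|| sudo docker ps -a` switches to Docker whenever the crictl invocation errors. The same logic expanded for readability (a sketch, not minikube source):

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a || sudo docker ps -a   # fall back to docker if crictl errors
    else
        sudo docker ps -a                        # crictl not on PATH at all
    fi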
	I1217 02:10:54.151396 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:54.162403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:54.162479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:54.188307 1498704 cri.go:89] found id: ""
	I1217 02:10:54.188331 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.188340 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:54.188347 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:54.188411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:54.222781 1498704 cri.go:89] found id: ""
	I1217 02:10:54.222803 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.222818 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:54.222824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:54.222886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:54.251344 1498704 cri.go:89] found id: ""
	I1217 02:10:54.251415 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.251439 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:54.251451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:54.251535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:54.280867 1498704 cri.go:89] found id: ""
	I1217 02:10:54.280889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.280898 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:54.280904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:54.280966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:54.306150 1498704 cri.go:89] found id: ""
	I1217 02:10:54.306177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.306185 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:54.306192 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:54.306250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:54.330272 1498704 cri.go:89] found id: ""
	I1217 02:10:54.330296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.330310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:54.330317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:54.330375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:54.359393 1498704 cri.go:89] found id: ""
	I1217 02:10:54.359423 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.359431 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:54.359438 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:54.359525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:54.392745 1498704 cri.go:89] found id: ""
	I1217 02:10:54.392780 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.392804 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:54.392822 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:54.392835 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:54.469149 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:54.460070   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.460755   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462299   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462877   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.464624   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:54.469171 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:54.469185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:54.495699 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:54.495738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.524004 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:54.524031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:54.579558 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:54.579592 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
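The dmesg step filters the kernel ring buffer down to warnings and above: -P disables the pager, -H selects human-readable output, -L=never turns off color, --level picks the severities, and tail keeps the newest 400 lines. The same command with util-linux long options:

    sudo dmesg --nopager --human --color=never \
        --level warn,err,crit,alert,emerg | tail -n 400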
	I1217 02:10:57.095655 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:57.106067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:57.106145 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:57.130932 1498704 cri.go:89] found id: ""
	I1217 02:10:57.130961 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.130970 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:57.130976 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:57.131046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:57.160073 1498704 cri.go:89] found id: ""
	I1217 02:10:57.160098 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.160107 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:57.160113 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:57.160173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:57.184768 1498704 cri.go:89] found id: ""
	I1217 02:10:57.184793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.184802 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:57.184808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:57.184867 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:57.210332 1498704 cri.go:89] found id: ""
	I1217 02:10:57.210358 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.210367 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:57.210374 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:57.210457 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:57.234920 1498704 cri.go:89] found id: ""
	I1217 02:10:57.234984 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.234999 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:57.235007 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:57.235072 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:57.260151 1498704 cri.go:89] found id: ""
	I1217 02:10:57.260183 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.260193 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:57.260201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:57.260310 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:57.287966 1498704 cri.go:89] found id: ""
	I1217 02:10:57.288000 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.288009 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:57.288032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:57.288115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:57.312191 1498704 cri.go:89] found id: ""
	I1217 02:10:57.312252 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.312284 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:57.312306 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:57.312330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:57.344168 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:57.344196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:57.400635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:57.400672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:57.416567 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:57.416594 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:57.485990 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:57.478006   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.478609   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480125   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480618   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.482100   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:57.486013 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:57.486028 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
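`journalctl -u <unit> -n 400` returns the newest 400 journal entries for a systemd unit; minikube runs it for both kubelet and containerd on every pass. When reproducing this failure interactively, following a unit live can be more useful than repeated snapshots:

    sudo journalctl -u containerd -n 400 --no-pager   # snapshot, as in the log
    sudo journalctl -u kubelet -f                     # follow new entries live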
	I1217 02:11:00.011650 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:00.083065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:00.083205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:00.177092 1498704 cri.go:89] found id: ""
	I1217 02:11:00.177120 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.177129 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:00.177137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:00.177210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:00.240557 1498704 cri.go:89] found id: ""
	I1217 02:11:00.240645 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.240670 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:00.240689 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:00.240818 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:00.290983 1498704 cri.go:89] found id: ""
	I1217 02:11:00.291075 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.291101 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:00.291120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:00.291245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:00.339816 1498704 cri.go:89] found id: ""
	I1217 02:11:00.339906 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.339935 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:00.339955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:00.340060 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:00.400482 1498704 cri.go:89] found id: ""
	I1217 02:11:00.400508 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.400516 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:00.400525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:00.400594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:00.437316 1498704 cri.go:89] found id: ""
	I1217 02:11:00.437386 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.437413 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:00.437432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:00.437531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:00.464791 1498704 cri.go:89] found id: ""
	I1217 02:11:00.464859 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.464881 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:00.464899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:00.464986 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:00.492400 1498704 cri.go:89] found id: ""
	I1217 02:11:00.492468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.492492 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:00.492514 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:00.492551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:00.549202 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:00.549237 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:00.564046 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:00.564073 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:00.636379 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:00.636409 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:00.636423 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.666039 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:00.666076 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
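Each "listing CRI containers" step is one crictl call per component, with --quiet printing only container IDs; empty output is what produces the paired `found id: ""` / `0 containers: []` lines above. The same sweep as a single loop over the names this log checks:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        echo "$name: $(sudo crictl ps -a --quiet --name="$name" | wc -l) container(s)"
    done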
	I1217 02:11:03.197992 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:03.209540 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:03.209610 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:03.237337 1498704 cri.go:89] found id: ""
	I1217 02:11:03.237411 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.237436 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:03.237458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:03.237545 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:03.262191 1498704 cri.go:89] found id: ""
	I1217 02:11:03.262213 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.262221 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:03.262228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:03.262286 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:03.286816 1498704 cri.go:89] found id: ""
	I1217 02:11:03.286840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.286850 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:03.286856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:03.286915 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:03.310933 1498704 cri.go:89] found id: ""
	I1217 02:11:03.311007 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.311023 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:03.311031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:03.311089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:03.334605 1498704 cri.go:89] found id: ""
	I1217 02:11:03.334628 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.334637 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:03.334643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:03.334701 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:03.359646 1498704 cri.go:89] found id: ""
	I1217 02:11:03.359681 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.359690 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:03.359697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:03.359789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:03.391919 1498704 cri.go:89] found id: ""
	I1217 02:11:03.391946 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.391955 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:03.391962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:03.392025 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:03.419543 1498704 cri.go:89] found id: ""
	I1217 02:11:03.419567 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.419576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:03.419586 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:03.419600 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.455897 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:03.455925 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:03.512216 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:03.512255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:03.527344 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:03.527372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:03.591374 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:03.591396 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:03.591408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
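describe nodes is run with the kubectl binary pinned under /var/lib/minikube/binaries/v1.35.0-beta.0/ against the in-node kubeconfig, so the endpoint it dials comes from that file rather than the host's kubeconfig. A quick sketch to check which server it targets:

    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # given the errors above, this should print: server: https://localhost:8443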
	I1217 02:11:06.117735 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:06.128394 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:06.128466 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:06.155397 1498704 cri.go:89] found id: ""
	I1217 02:11:06.155420 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.155430 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:06.155436 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:06.155669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:06.185554 1498704 cri.go:89] found id: ""
	I1217 02:11:06.185631 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.185682 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:06.185697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:06.185769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:06.214540 1498704 cri.go:89] found id: ""
	I1217 02:11:06.214564 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.214573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:06.214579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:06.214637 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:06.240468 1498704 cri.go:89] found id: ""
	I1217 02:11:06.240492 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.240501 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:06.240507 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:06.240570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:06.266674 1498704 cri.go:89] found id: ""
	I1217 02:11:06.266697 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.266706 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:06.266712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:06.266781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:06.292194 1498704 cri.go:89] found id: ""
	I1217 02:11:06.292218 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.292227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:06.292233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:06.292295 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:06.320979 1498704 cri.go:89] found id: ""
	I1217 02:11:06.321002 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.321011 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:06.321017 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:06.321074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:06.347269 1498704 cri.go:89] found id: ""
	I1217 02:11:06.347294 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.347303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:06.347315 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:06.347326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:06.409046 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:06.409101 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:06.425379 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:06.425406 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.490322 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:06.490345 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:06.490357 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.515786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:06.515825 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
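Reading the probe timestamps across this section (02:10:48, :51, :54, :57, 02:11:00, :03, :06, :09) gives a steady retry interval of roughly three seconds, with zero control-plane containers found on every pass, so the apiserver never comes up inside the wait window. A sketch for extracting that cadence from a saved copy of this output (minikube.log is an assumed filename):

    grep 'Run: sudo pgrep -xnf kube-apiserver' minikube.log \
        | awk '{print $2}'   # one probe timestamp per line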
	I1217 02:11:09.043785 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:09.054506 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:09.054580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:09.079819 1498704 cri.go:89] found id: ""
	I1217 02:11:09.079848 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.079856 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:09.079862 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:09.079921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:09.104928 1498704 cri.go:89] found id: ""
	I1217 02:11:09.104953 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.104963 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:09.104969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:09.105031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:09.130212 1498704 cri.go:89] found id: ""
	I1217 02:11:09.130238 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.130246 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:09.130255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:09.130358 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:09.159130 1498704 cri.go:89] found id: ""
	I1217 02:11:09.159153 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.159162 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:09.159169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:09.159245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:09.184267 1498704 cri.go:89] found id: ""
	I1217 02:11:09.184292 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.184301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:09.184307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:09.184371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:09.209170 1498704 cri.go:89] found id: ""
	I1217 02:11:09.209195 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.209204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:09.209210 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:09.209271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:09.235842 1498704 cri.go:89] found id: ""
	I1217 02:11:09.235869 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.235878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:09.235884 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:09.235946 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:09.265413 1498704 cri.go:89] found id: ""
	I1217 02:11:09.265445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.265454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:09.265463 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.265475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.302759 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:09.302784 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:09.358361 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:09.358394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:09.378248 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:09.378278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.451227 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.451247 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:09.451260 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:11.977784 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.988725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:11.988798 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:12.015755 1498704 cri.go:89] found id: ""
	I1217 02:11:12.015778 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.015788 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:12.015795 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:12.015866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:12.042225 1498704 cri.go:89] found id: ""
	I1217 02:11:12.042250 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.042259 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:12.042269 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:12.042328 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:12.067951 1498704 cri.go:89] found id: ""
	I1217 02:11:12.067977 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.067987 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:12.067993 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:12.068054 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:12.094539 1498704 cri.go:89] found id: ""
	I1217 02:11:12.094565 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.094574 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:12.094580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:12.094641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:12.120422 1498704 cri.go:89] found id: ""
	I1217 02:11:12.120445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.120454 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:12.120461 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:12.120521 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:12.146437 1498704 cri.go:89] found id: ""
	I1217 02:11:12.146465 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.146491 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:12.146498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:12.146560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:12.171817 1498704 cri.go:89] found id: ""
	I1217 02:11:12.171840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.171849 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:12.171855 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:12.171914 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:12.200987 1498704 cri.go:89] found id: ""
	I1217 02:11:12.201013 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.201022 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:12.201031 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.201043 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:12.232701 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:12.232731 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.288687 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.288722 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.303401 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.303479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.371087 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.371112 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:12.371125 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:14.899732 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.913037 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:14.913112 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:14.939368 1498704 cri.go:89] found id: ""
	I1217 02:11:14.939399 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.939408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.939415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:14.939476 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:14.964809 1498704 cri.go:89] found id: ""
	I1217 02:11:14.964835 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.964844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.964849 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:14.964911 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:14.992442 1498704 cri.go:89] found id: ""
	I1217 02:11:14.992468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.992477 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.992483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:14.992542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:15.029492 1498704 cri.go:89] found id: ""
	I1217 02:11:15.029518 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.029527 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:15.029534 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:15.029604 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:15.059736 1498704 cri.go:89] found id: ""
	I1217 02:11:15.059760 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.059770 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:15.059776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:15.059841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:15.086908 1498704 cri.go:89] found id: ""
	I1217 02:11:15.086991 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.087014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:15.087029 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:15.087104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:15.113800 1498704 cri.go:89] found id: ""
	I1217 02:11:15.113829 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.113838 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.113844 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:15.113903 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:15.140421 1498704 cri.go:89] found id: ""
	I1217 02:11:15.140445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.140454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.140463 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.140475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.197971 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.198003 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.213157 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.213186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.278282 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.278303 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:15.278316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:15.303867 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.303900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.833800 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.844470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:17.844546 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:17.871228 1498704 cri.go:89] found id: ""
	I1217 02:11:17.871254 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.871262 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.871270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:17.871345 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:17.909403 1498704 cri.go:89] found id: ""
	I1217 02:11:17.909430 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.909438 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.909444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:17.909505 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:17.942319 1498704 cri.go:89] found id: ""
	I1217 02:11:17.942341 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.942348 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.942355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:17.942416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:17.967521 1498704 cri.go:89] found id: ""
	I1217 02:11:17.967546 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.967554 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.967561 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:17.967619 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:17.995465 1498704 cri.go:89] found id: ""
	I1217 02:11:17.995488 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.995518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:17.995526 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:17.995587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:18.023559 1498704 cri.go:89] found id: ""
	I1217 02:11:18.023587 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.023596 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.023603 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:18.023664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:18.049983 1498704 cri.go:89] found id: ""
	I1217 02:11:18.050011 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.050027 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.050033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:18.050096 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:18.081999 1498704 cri.go:89] found id: ""
	I1217 02:11:18.082023 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.082033 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.082042 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.082054 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.096662 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.096692 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.160156 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.160179 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:18.160192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:18.185291 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.185325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.216271 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.216298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.775311 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:20.789631 1498704 out.go:203] 
	W1217 02:11:20.792902 1498704 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:11:20.792939 1498704 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:11:20.792950 1498704 out.go:285] * Related issues:
	W1217 02:11:20.792967 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:11:20.792986 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:11:20.795906 1498704 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212356563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212424346Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212528511Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212600537Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212667581Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212731344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212789486Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212848654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212916946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213001919Z" level=info msg="Connect containerd service"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213359100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.214132836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224058338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224260137Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224195259Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.233004319Z" level=info msg="Start recovering state"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.265931194Z" level=info msg="Start event monitor"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266119036Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266183250Z" level=info msg="Start streaming server"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266253167Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266318809Z" level=info msg="runtime interface starting up..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266375187Z" level=info msg="starting plugins..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266454539Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:05:19 newest-cni-456492 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.268086737Z" level=info msg="containerd successfully booted in 0.090817s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.077420   13384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.077910   13384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.079392   13384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.079726   13384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.081293   13384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:11:24 up  7:53,  0 user,  load average: 0.40, 0.70, 1.21
	Linux newest-cni-456492 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:11:20 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:21 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 17 02:11:21 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:21 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:21 newest-cni-456492 kubelet[13263]: E1217 02:11:21.690912   13263 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:21 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:21 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:22 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 17 02:11:22 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:22 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:22 newest-cni-456492 kubelet[13268]: E1217 02:11:22.452130   13268 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:22 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:22 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:23 newest-cni-456492 kubelet[13288]: E1217 02:11:23.201063   13288 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:23 newest-cni-456492 kubelet[13347]: E1217 02:11:23.934749   13347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:23 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
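The kubelet section above shows the root cause of this run: kubelet v1.35.0-beta.0 validates its configuration, finds the host on cgroup v1, and exits ("cgroup v1 support is unsupported"), so no static pods, including kube-apiserver, are ever created. As a sketch for confirming this condition on a node (a generic Linux check, not a command the harness runs):

	# Prints cgroup2fs on a cgroup v2 host; tmpfs indicates legacy cgroup v1,
	# which this kubelet rejects during configuration validation.
	stat -fc %T /sys/fs/cgroup/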
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (338.346065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-456492" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (373.02s)
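For reference, the apiserver probe that ultimately produced the K8S_APISERVER_MISSING exit can be repeated by hand on the node with the same two commands the harness loops on in the log above; during this run both returned nothing:

	# The process probe minikube polls for up to 6m0s
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# The CRI query; empty output means no apiserver container was ever created
	sudo crictl ps -a --quiet --name=kube-apiserver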

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
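The warnings below come from polling the apiserver for dashboard pods; the equivalent manual query (a sketch that assumes a reachable cluster, which this run never regained) is:

	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard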
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 33 more times]
E1217 02:09:50.423409 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 18 more times]
E1217 02:10:09.433708 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 25 more times]
E1217 02:10:35.917764 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 57 more times]
E1217 02:11:33.442437 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 22 more times]
E1217 02:11:56.877346 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 1 more time]
E1217 02:11:58.981507 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 19 more times]
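The WARNING line above (and its repeats throughout this run) comes from a single retry loop: a helper that lists pods by label selector and logs every failed attempt while waiting for the apiserver to come back. A minimal sketch of that pattern, assuming client-go; the helper name waitForPodsWithLabel is hypothetical, and the real code in helpers_test.go differs in detail:

// waitpods_sketch.go — a minimal sketch, assuming client-go; not the
// harness's actual helper.
package logpoll

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsWithLabel polls until at least one pod matches the selector,
// logging a WARNING for each failed list (e.g. "connection refused" while
// the apiserver is down) instead of failing fast.
func waitForPodsWithLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Matches the shape of the log lines above; transient
				// errors are logged and retried, not returned.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
}

Each failed iteration emits one WARNING line, which is why a single long-running poll against a down apiserver fills the log with hundreds of identical entries.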
E1217 02:12:56.506891 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 113 more times]
E1217 02:14:50.423609 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 1 more time]
E1217 02:14:52.518112 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 16 more times]
E1217 02:15:09.433753 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 25 more times]
E1217 02:15:35.917290 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 8 more times]
I1217 02:15:44.301555 1211243 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
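[editor's note] The kapi.go wait above treats the netcat deployment as stable only once the controller has observed the latest spec generation and status.replicas has caught up to spec.replicas; here the generations match but status.replicas is still 0, so the wait continues. A minimal sketch of that predicate, assuming client-go's apps/v1 types (the actual helper may also gate on updated/available replicas):

    // Sketch only: the stability predicate implied by the kapi.go log line,
    // using client-go's apps/v1 types. The real helper may check more fields;
    // this mirrors just the logged ones (generation and replica counts).
    package kapi

    import appsv1 "k8s.io/api/apps/v1"

    // deploymentStable reports whether the controller has observed the
    // latest spec generation and status.replicas has caught up.
    func deploymentStable(d *appsv1.Deployment) bool {
        want := int32(1) // spec.replicas defaults to 1 when nil
        if d.Spec.Replicas != nil {
            want = *d.Spec.Replicas
        }
        return d.Status.ObservedGeneration >= d.Generation && d.Status.Replicas == want
    }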
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 28 more times]
E1217 02:16:13.493331 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 19 more times]
E1217 02:16:33.442229 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 22 more times]
E1217 02:16:56.877790 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 15 more times]
I1217 02:17:12.370701 1211243 config.go:182] Loaded profile config "custom-flannel-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 16 more times]
E1217 02:17:30.040838 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.047925 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.059377 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.081143 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.122579 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.204383 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:17:30.367095 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:17:30.689118 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:17:31.330613 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:17:32.612397 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:17:35.174602 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 5 more times]
E1217 02:17:40.296588 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 9 more times]
E1217 02:17:50.538294 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 18 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:18:11.020088 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 2 (494.226763ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
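The failure is not the dashboard pod itself: every poll was refused at the TCP level, so the apiserver at 192.168.76.2:8443 never came back after the stop/start cycle. A minimal manual reproduction of the same check, assuming shell access to the CI host (endpoint and label selector copied verbatim from the warnings above; these commands are illustrative and not part of the test suite):

	# Probe the raw apiserver port first; "connection refused" here matches the log.
	nc -zv 192.168.76.2 8443
	# Then issue the same pod-list request the test helper makes (-k skips cert
	# verification for a quick probe; the real client authenticates with certs).
	curl -k "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"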
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1494487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:03:06.71743355Z",
	            "FinishedAt": "2025-12-17T02:03:05.348756992Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9255e0863872038f878a0377593d952443e5d8a7e0d1715541fab06d752ef770",
	            "SandboxKey": "/var/run/docker/netns/9255e0863872",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9e:f4:59:45:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "02e66a97e08a8d712f4ba9f711db1ac614b5e96335d8aceb3d7eccb7c2a2e478",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
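The inspect payload shows the container itself is running (restart count 0, restarted at 02:03:06) even though its apiserver refuses connections. The two fields that matter for this post-mortem can be pulled directly with docker inspect's --format flag rather than scanning the full payload (a convenience sketch, using the same Go-template syntax the logs below use for port lookups):

	# Container state ("running" per the payload above).
	docker inspect -f '{{.State.Status}}' no-preload-178365
	# Host port published for the apiserver's 8443/tcp (34257 above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-178365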
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 2 (498.629608ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-178365 logs -n 25: (1.136561139s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-721629 sudo systemctl status kubelet --all --full --no-pager                                                                                        │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl cat kubelet --no-pager                                                                                                        │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                         │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /etc/kubernetes/kubelet.conf                                                                                                        │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /var/lib/kubelet/config.yaml                                                                                                        │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl status docker --all --full --no-pager                                                                                         │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl cat docker --no-pager                                                                                                         │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /etc/docker/daemon.json                                                                                                             │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo docker system info                                                                                                                      │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl status cri-docker --all --full --no-pager                                                                                     │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl cat cri-docker --no-pager                                                                                                     │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                          │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cri-dockerd --version                                                                                                                   │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl status containerd --all --full --no-pager                                                                                     │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl cat containerd --no-pager                                                                                                     │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /lib/systemd/system/containerd.service                                                                                              │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo cat /etc/containerd/config.toml                                                                                                         │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo containerd config dump                                                                                                                  │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl status crio --all --full --no-pager                                                                                           │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	│ ssh     │ -p custom-flannel-721629 sudo systemctl cat crio --no-pager                                                                                                           │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                 │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ ssh     │ -p custom-flannel-721629 sudo crio config                                                                                                                             │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ delete  │ -p custom-flannel-721629                                                                                                                                              │ custom-flannel-721629     │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │ 17 Dec 25 02:17 UTC │
	│ start   │ -p enable-default-cni-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd │ enable-default-cni-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:17:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:17:43.266249 1545299 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:17:43.266420 1545299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:17:43.266451 1545299 out.go:374] Setting ErrFile to fd 2...
	I1217 02:17:43.266472 1545299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:17:43.266735 1545299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:17:43.267173 1545299 out.go:368] Setting JSON to false
	I1217 02:17:43.268086 1545299 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28814,"bootTime":1765909050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:17:43.268185 1545299 start.go:143] virtualization:  
	I1217 02:17:43.274980 1545299 out.go:179] * [enable-default-cni-721629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:17:43.278780 1545299 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:17:43.278864 1545299 notify.go:221] Checking for updates...
	I1217 02:17:43.285892 1545299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:17:43.289254 1545299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:17:43.292423 1545299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:17:43.295676 1545299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:17:43.298883 1545299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:17:43.302426 1545299 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:17:43.302530 1545299 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:17:43.326377 1545299 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:17:43.326496 1545299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:17:43.402247 1545299 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:17:43.392635639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:17:43.402359 1545299 docker.go:319] overlay module found
	I1217 02:17:43.405713 1545299 out.go:179] * Using the docker driver based on user configuration
	I1217 02:17:43.408724 1545299 start.go:309] selected driver: docker
	I1217 02:17:43.408744 1545299 start.go:927] validating driver "docker" against <nil>
	I1217 02:17:43.408759 1545299 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:17:43.409480 1545299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:17:43.483233 1545299 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:17:43.472669278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:17:43.483399 1545299 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1217 02:17:43.483622 1545299 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1217 02:17:43.483647 1545299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:17:43.486710 1545299 out.go:179] * Using Docker driver with root privileges
	I1217 02:17:43.489794 1545299 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:17:43.489824 1545299 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 02:17:43.489910 1545299 start.go:353] cluster config:
	{Name:enable-default-cni-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:17:43.495007 1545299 out.go:179] * Starting "enable-default-cni-721629" primary control-plane node in "enable-default-cni-721629" cluster
	I1217 02:17:43.497801 1545299 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:17:43.500760 1545299 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:17:43.503789 1545299 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:17:43.503850 1545299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:17:43.503854 1545299 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1217 02:17:43.503899 1545299 cache.go:65] Caching tarball of preloaded images
	I1217 02:17:43.504012 1545299 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:17:43.504032 1545299 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1217 02:17:43.504161 1545299 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/config.json ...
	I1217 02:17:43.504200 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/config.json: {Name:mk16a015d70b8cd5a935a156a464c7c813ded68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:43.526459 1545299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:17:43.526484 1545299 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:17:43.526511 1545299 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:17:43.526541 1545299 start.go:360] acquireMachinesLock for enable-default-cni-721629: {Name:mk75f38e9c03f09568c8449ec58c0fc0b24f595d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:17:43.526658 1545299 start.go:364] duration metric: took 95.73µs to acquireMachinesLock for "enable-default-cni-721629"
	I1217 02:17:43.526689 1545299 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:17:43.526802 1545299 start.go:125] createHost starting for "" (driver="docker")
	I1217 02:17:43.530113 1545299 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 02:17:43.530388 1545299 start.go:159] libmachine.API.Create for "enable-default-cni-721629" (driver="docker")
	I1217 02:17:43.530422 1545299 client.go:173] LocalClient.Create starting
	I1217 02:17:43.530495 1545299 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 02:17:43.530533 1545299 main.go:143] libmachine: Decoding PEM data...
	I1217 02:17:43.530558 1545299 main.go:143] libmachine: Parsing certificate...
	I1217 02:17:43.530616 1545299 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 02:17:43.530639 1545299 main.go:143] libmachine: Decoding PEM data...
	I1217 02:17:43.530651 1545299 main.go:143] libmachine: Parsing certificate...
	I1217 02:17:43.531022 1545299 cli_runner.go:164] Run: docker network inspect enable-default-cni-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 02:17:43.545926 1545299 cli_runner.go:211] docker network inspect enable-default-cni-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 02:17:43.546011 1545299 network_create.go:284] running [docker network inspect enable-default-cni-721629] to gather additional debugging logs...
	I1217 02:17:43.546034 1545299 cli_runner.go:164] Run: docker network inspect enable-default-cni-721629
	W1217 02:17:43.560063 1545299 cli_runner.go:211] docker network inspect enable-default-cni-721629 returned with exit code 1
	I1217 02:17:43.560091 1545299 network_create.go:287] error running [docker network inspect enable-default-cni-721629]: docker network inspect enable-default-cni-721629: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-721629 not found
	I1217 02:17:43.560105 1545299 network_create.go:289] output of [docker network inspect enable-default-cni-721629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-721629 not found
	
	** /stderr **
	I1217 02:17:43.560215 1545299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:17:43.576195 1545299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 02:17:43.576492 1545299 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 02:17:43.576798 1545299 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 02:17:43.577044 1545299 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 02:17:43.577473 1545299 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e4f90}
	I1217 02:17:43.577492 1545299 network_create.go:124] attempt to create docker network enable-default-cni-721629 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 02:17:43.577555 1545299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-721629 enable-default-cni-721629
	I1217 02:17:43.638601 1545299 network_create.go:108] docker network enable-default-cni-721629 192.168.85.0/24 created
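The subnet scan above walks the 192.168.x.0/24 private ranges, skips each one already claimed by an existing bridge, and settles on the first free one, 192.168.85.0/24. The occupied subnets it reports can be confirmed independently with a stock docker one-liner (illustrative only, not part of the run):

	# Print each docker network with its IPAM subnet, mirroring the
	# "skipping subnet ... that is taken" decisions logged above.
	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'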
	I1217 02:17:43.638643 1545299 kic.go:121] calculated static IP "192.168.85.2" for the "enable-default-cni-721629" container
	I1217 02:17:43.638734 1545299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 02:17:43.655325 1545299 cli_runner.go:164] Run: docker volume create enable-default-cni-721629 --label name.minikube.sigs.k8s.io=enable-default-cni-721629 --label created_by.minikube.sigs.k8s.io=true
	I1217 02:17:43.672960 1545299 oci.go:103] Successfully created a docker volume enable-default-cni-721629
	I1217 02:17:43.673050 1545299 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-721629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-721629 --entrypoint /usr/bin/test -v enable-default-cni-721629:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 02:17:44.200622 1545299 oci.go:107] Successfully prepared a docker volume enable-default-cni-721629
	I1217 02:17:44.200691 1545299 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:17:44.200704 1545299 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 02:17:44.200778 1545299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-721629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 02:17:48.310851 1545299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-721629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.110018212s)
	I1217 02:17:48.310886 1545299 kic.go:203] duration metric: took 4.110179247s to extract preloaded images to volume ...
	W1217 02:17:48.311024 1545299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 02:17:48.311142 1545299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 02:17:48.361385 1545299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-721629 --name enable-default-cni-721629 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-721629 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-721629 --network enable-default-cni-721629 --ip 192.168.85.2 --volume enable-default-cni-721629:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 02:17:48.662882 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Running}}
	I1217 02:17:48.691909 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:17:48.717827 1545299 cli_runner.go:164] Run: docker exec enable-default-cni-721629 stat /var/lib/dpkg/alternatives/iptables
	I1217 02:17:48.773577 1545299 oci.go:144] the created container "enable-default-cni-721629" has a running status.
	I1217 02:17:48.773606 1545299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa...
	I1217 02:17:49.161518 1545299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 02:17:49.199383 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:17:49.222458 1545299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 02:17:49.222480 1545299 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-721629 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 02:17:49.277313 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:17:49.297059 1545299 machine.go:94] provisionDockerMachine start ...
	I1217 02:17:49.297160 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:49.314072 1545299 main.go:143] libmachine: Using SSH client type: native
	I1217 02:17:49.314417 1545299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I1217 02:17:49.314426 1545299 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:17:49.315078 1545299 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:17:52.453405 1545299 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-721629
	
	I1217 02:17:52.453441 1545299 ubuntu.go:182] provisioning hostname "enable-default-cni-721629"
	I1217 02:17:52.453531 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:52.471631 1545299 main.go:143] libmachine: Using SSH client type: native
	I1217 02:17:52.471941 1545299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I1217 02:17:52.471958 1545299 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-721629 && echo "enable-default-cni-721629" | sudo tee /etc/hostname
	I1217 02:17:52.611011 1545299 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-721629
	
	I1217 02:17:52.611104 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:52.628666 1545299 main.go:143] libmachine: Using SSH client type: native
	I1217 02:17:52.628980 1545299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I1217 02:17:52.628997 1545299 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-721629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-721629/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-721629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:17:52.757917 1545299 main.go:143] libmachine: SSH cmd err, output: <nil>: 
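The shell block executed above guarantees /etc/hosts maps 127.0.1.1 to the node's hostname, either by rewriting an existing 127.0.1.1 entry or appending a new one. A Go sketch of the same edit, operating on a local copy of the file rather than /etc/hosts:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // ensureHostname mirrors the grep/sed/tee logic above: if no entry ends with
    // the hostname, rewrite an existing 127.0.1.1 line or append a fresh one.
    func ensureHostname(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
            return nil // hostname already mapped
        }
        line := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if line.Match(data) {
            data = line.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        // "hosts" is a local copy for the sketch; the real target is /etc/hosts.
        if err := ensureHostname("hosts", "enable-default-cni-721629"); err != nil {
            log.Fatal(err)
        }
    }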
	I1217 02:17:52.757943 1545299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:17:52.757974 1545299 ubuntu.go:190] setting up certificates
	I1217 02:17:52.757990 1545299 provision.go:84] configureAuth start
	I1217 02:17:52.758054 1545299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-721629
	I1217 02:17:52.775235 1545299 provision.go:143] copyHostCerts
	I1217 02:17:52.775304 1545299 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:17:52.775317 1545299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:17:52.775393 1545299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:17:52.775520 1545299 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:17:52.775531 1545299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:17:52.775564 1545299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:17:52.775644 1545299 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:17:52.775655 1545299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:17:52.775681 1545299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:17:52.775744 1545299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-721629 san=[127.0.0.1 192.168.85.2 enable-default-cni-721629 localhost minikube]
	I1217 02:17:52.976308 1545299 provision.go:177] copyRemoteCerts
	I1217 02:17:52.976374 1545299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:17:52.976416 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:52.993558 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:17:53.089533 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:17:53.108192 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 02:17:53.128498 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:17:53.148790 1545299 provision.go:87] duration metric: took 390.773427ms to configureAuth
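configureAuth (provision.go:117 above) signs a Docker server certificate against the minikubeCA key, with SANs covering 127.0.0.1, 192.168.85.2, the node name, localhost and minikube. A compact crypto/x509 sketch of issuing such a cert; the in-memory CA here stands in for ca.pem/ca-key.pem, and key sizes and lifetimes are illustrative assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    func main() {
        // Self-signed CA standing in for .minikube/certs/ca.pem in this sketch.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in this run
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caCert := must(x509.ParseCertificate(
            must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

        // Server cert with the SANs listed in the provision.go:117 line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-721629"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"enable-default-cni-721629", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        if err := os.WriteFile("server.pem", out, 0o644); err != nil {
            log.Fatal(err)
        }
    }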
	I1217 02:17:53.148821 1545299 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:17:53.149015 1545299 config.go:182] Loaded profile config "enable-default-cni-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 02:17:53.149029 1545299 machine.go:97] duration metric: took 3.851953011s to provisionDockerMachine
	I1217 02:17:53.149037 1545299 client.go:176] duration metric: took 9.618604688s to LocalClient.Create
	I1217 02:17:53.149051 1545299 start.go:167] duration metric: took 9.618665513s to libmachine.API.Create "enable-default-cni-721629"
	I1217 02:17:53.149063 1545299 start.go:293] postStartSetup for "enable-default-cni-721629" (driver="docker")
	I1217 02:17:53.149072 1545299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:17:53.149141 1545299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:17:53.149183 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:53.168962 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:17:53.271846 1545299 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:17:53.275318 1545299 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:17:53.275349 1545299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:17:53.275361 1545299 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:17:53.275413 1545299 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:17:53.275511 1545299 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:17:53.275623 1545299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:17:53.282992 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:17:53.300181 1545299 start.go:296] duration metric: took 151.10418ms for postStartSetup
	I1217 02:17:53.300546 1545299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-721629
	I1217 02:17:53.317316 1545299 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/config.json ...
	I1217 02:17:53.317602 1545299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:17:53.317709 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:53.334283 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:17:53.427333 1545299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:17:53.432566 1545299 start.go:128] duration metric: took 9.905748832s to createHost
	I1217 02:17:53.432591 1545299 start.go:83] releasing machines lock for "enable-default-cni-721629", held for 9.905919788s
	I1217 02:17:53.432662 1545299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-721629
	I1217 02:17:53.451114 1545299 ssh_runner.go:195] Run: cat /version.json
	I1217 02:17:53.451169 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:53.451171 1545299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:17:53.451229 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:17:53.482446 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:17:53.487671 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:17:53.581380 1545299 ssh_runner.go:195] Run: systemctl --version
	I1217 02:17:53.672698 1545299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:17:53.677081 1545299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:17:53.677198 1545299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:17:53.705315 1545299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
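cni.go:262 sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find/mv invocation above does. A Go sketch of the same rename pass, using a local net.d directory as a stand-in for /etc/cni/net.d:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"net.d/*bridge*", "net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                log.Fatal(err)
            }
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled, mirrors find's -not -name filter
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                fmt.Println("disabled", f)
            }
        }
    }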
	I1217 02:17:53.705394 1545299 start.go:496] detecting cgroup driver to use...
	I1217 02:17:53.705441 1545299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:17:53.705532 1545299 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:17:53.720830 1545299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:17:53.733765 1545299 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:17:53.733837 1545299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:17:53.751578 1545299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:17:53.770544 1545299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:17:53.895521 1545299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:17:54.026537 1545299 docker.go:234] disabling docker service ...
	I1217 02:17:54.026610 1545299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:17:54.050906 1545299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:17:54.064881 1545299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:17:54.190604 1545299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:17:54.304974 1545299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:17:54.318261 1545299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:17:54.331935 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:17:54.340598 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:17:54.349217 1545299 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:17:54.349298 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:17:54.358355 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:17:54.367070 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:17:54.376018 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:17:54.385114 1545299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:17:54.393144 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:17:54.402356 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:17:54.411240 1545299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:17:54.420453 1545299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:17:54.428413 1545299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:17:54.436032 1545299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:17:54.567087 1545299 ssh_runner.go:195] Run: sudo systemctl restart containerd
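The sed pipeline above pins the pause image, flips SystemdCgroup off to match the detected cgroupfs driver, and normalizes runtime names in /etc/containerd/config.toml before restarting containerd. An equivalent in-process rewrite, sketched in Go with regexp and applied to a local copy of the file:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "config.toml" // local copy; the real file is /etc/containerd/config.toml
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        subs := []struct{ re, to string }{
            {`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
            {`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
            {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
            {`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
        }
        for _, s := range subs {
            data = regexp.MustCompile(s.re).ReplaceAll(data, []byte(s.to))
        }
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }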
	I1217 02:17:54.726765 1545299 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:17:54.726851 1545299 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:17:54.730768 1545299 start.go:564] Will wait 60s for crictl version
	I1217 02:17:54.730879 1545299 ssh_runner.go:195] Run: which crictl
	I1217 02:17:54.734350 1545299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:17:54.757251 1545299 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:17:54.757372 1545299 ssh_runner.go:195] Run: containerd --version
	I1217 02:17:54.779774 1545299 ssh_runner.go:195] Run: containerd --version
	I1217 02:17:54.805020 1545299 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1217 02:17:54.808022 1545299 cli_runner.go:164] Run: docker network inspect enable-default-cni-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:17:54.825723 1545299 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:17:54.829558 1545299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:17:54.840379 1545299 kubeadm.go:884] updating cluster {Name:enable-default-cni-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:17:54.840498 1545299 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:17:54.840568 1545299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:17:54.865372 1545299 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:17:54.865406 1545299 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:17:54.865467 1545299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:17:54.889618 1545299 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:17:54.889767 1545299 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:17:54.889782 1545299 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1217 02:17:54.889888 1545299 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-721629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1217 02:17:54.889965 1545299 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:17:54.914687 1545299 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:17:54.914722 1545299 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:17:54.914781 1545299 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-721629 NodeName:enable-default-cni-721629 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:17:54.914921 1545299 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-721629"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:17:54.914998 1545299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 02:17:54.922956 1545299 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:17:54.923027 1545299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:17:54.930646 1545299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1217 02:17:54.944045 1545299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 02:17:54.957050 1545299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
	I1217 02:17:54.969984 1545299 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:17:54.973441 1545299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:17:54.983147 1545299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:17:55.104964 1545299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:17:55.121592 1545299 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629 for IP: 192.168.85.2
	I1217 02:17:55.121614 1545299 certs.go:195] generating shared ca certs ...
	I1217 02:17:55.121630 1545299 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:55.121830 1545299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:17:55.121914 1545299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:17:55.121929 1545299 certs.go:257] generating profile certs ...
	I1217 02:17:55.122013 1545299 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.key
	I1217 02:17:55.122030 1545299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.crt with IP's: []
	I1217 02:17:55.915400 1545299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.crt ...
	I1217 02:17:55.915439 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.crt: {Name:mk34f39b8314abfc279a5a149acc3688bee20123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:55.915638 1545299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.key ...
	I1217 02:17:55.915656 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/client.key: {Name:mkbf277642c7cb7324444a3463fffd06e4d10005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:55.915753 1545299 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key.4f588256
	I1217 02:17:55.915773 1545299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt.4f588256 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 02:17:56.061517 1545299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt.4f588256 ...
	I1217 02:17:56.061557 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt.4f588256: {Name:mkca86229c53c4a1661d991ade21e151ab7ec98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:56.061753 1545299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key.4f588256 ...
	I1217 02:17:56.061769 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key.4f588256: {Name:mk9adcab2eeb392ab498f272f1d966890adcc4da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:56.061856 1545299 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt.4f588256 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt
	I1217 02:17:56.061934 1545299 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key.4f588256 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key
	I1217 02:17:56.061997 1545299 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.key
	I1217 02:17:56.062011 1545299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.crt with IP's: []
	I1217 02:17:56.252376 1545299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.crt ...
	I1217 02:17:56.252408 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.crt: {Name:mk491c9cca16e2a8149cf9c348c529213bf6e1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:56.252598 1545299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.key ...
	I1217 02:17:56.252613 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.key: {Name:mk4d71ec154842b2a0738b7fb09a50fa8cad6274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:17:56.252817 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:17:56.252864 1545299 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:17:56.252878 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:17:56.252904 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:17:56.252934 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:17:56.252961 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:17:56.253011 1545299 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:17:56.253625 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:17:56.272971 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:17:56.290950 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:17:56.309326 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:17:56.327229 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 02:17:56.345095 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:17:56.362651 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:17:56.379798 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/enable-default-cni-721629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:17:56.397935 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:17:56.416363 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:17:56.434716 1545299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:17:56.453738 1545299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:17:56.467002 1545299 ssh_runner.go:195] Run: openssl version
	I1217 02:17:56.473413 1545299 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:17:56.481333 1545299 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:17:56.489116 1545299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:17:56.493072 1545299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:17:56.493139 1545299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:17:56.534173 1545299 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:17:56.541751 1545299 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
	I1217 02:17:56.549036 1545299 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:17:56.556584 1545299 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:17:56.564421 1545299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:17:56.568293 1545299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:17:56.568408 1545299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:17:56.609056 1545299 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:17:56.616563 1545299 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 02:17:56.626091 1545299 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:17:56.634077 1545299 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:17:56.643872 1545299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:17:56.648477 1545299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:17:56.648608 1545299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:17:56.693883 1545299 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:17:56.701239 1545299 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
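Each openssl x509 -hash / ln -fs pair above installs a certificate under /etc/ssl/certs using its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's hashed-directory lookup finds trust anchors. A sketch that shells out for the hash and creates the link, with relative paths standing in for the real ones:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // installCert links certPEM into certsDir under its OpenSSL subject hash.
    func installCert(certsDir, certPEM string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPEM).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // emulate ln -fs: replace any existing link
        return os.Symlink(certPEM, link)
    }

    func main() {
        if err := installCert("certs", "minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }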
	I1217 02:17:56.708805 1545299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:17:56.712495 1545299 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 02:17:56.712568 1545299 kubeadm.go:401] StartCluster: {Name:enable-default-cni-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:17:56.712652 1545299 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:17:56.712718 1545299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:17:56.740394 1545299 cri.go:89] found id: ""
	I1217 02:17:56.740561 1545299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:17:56.748586 1545299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 02:17:56.756440 1545299 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:17:56.756503 1545299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:17:56.764319 1545299 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:17:56.764347 1545299 kubeadm.go:158] found existing configuration files:
	
	I1217 02:17:56.764416 1545299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:17:56.771975 1545299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:17:56.772060 1545299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:17:56.779321 1545299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:17:56.786918 1545299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:17:56.786983 1545299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:17:56.794117 1545299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:17:56.801772 1545299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:17:56.801836 1545299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:17:56.809201 1545299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:17:56.816947 1545299 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:17:56.817013 1545299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:17:56.824307 1545299 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:17:56.864527 1545299 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 02:17:56.864905 1545299 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:17:56.891785 1545299 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:17:56.891864 1545299 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 02:17:56.891907 1545299 kubeadm.go:319] OS: Linux
	I1217 02:17:56.891958 1545299 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:17:56.892010 1545299 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:17:56.892062 1545299 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:17:56.892124 1545299 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:17:56.892176 1545299 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:17:56.892227 1545299 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:17:56.892276 1545299 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:17:56.892328 1545299 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:17:56.892378 1545299 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:17:56.970232 1545299 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:17:56.970346 1545299 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:17:56.970454 1545299 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:17:56.986019 1545299 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:17:56.992247 1545299 out.go:252]   - Generating certificates and keys ...
	I1217 02:17:56.992406 1545299 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:17:56.992509 1545299 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:17:57.236875 1545299 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:17:57.719938 1545299 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:17:57.824039 1545299 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:17:58.166490 1545299 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:17:58.460342 1545299 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:17:58.460609 1545299 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-721629 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 02:17:58.900425 1545299 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:17:58.900931 1545299 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-721629 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 02:17:59.086640 1545299 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:17:59.664894 1545299 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:17:59.766631 1545299 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:17:59.766829 1545299 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:18:00.548894 1545299 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:18:00.644289 1545299 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:18:01.206844 1545299 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:18:01.600716 1545299 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:18:02.047210 1545299 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:18:02.047975 1545299 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:18:02.050843 1545299 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:18:02.054518 1545299 out.go:252]   - Booting up control plane ...
	I1217 02:18:02.054622 1545299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:18:02.054700 1545299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:18:02.054767 1545299 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:18:02.072630 1545299 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:18:02.072749 1545299 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:18:02.081411 1545299 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:18:02.082068 1545299 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:18:02.082184 1545299 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:18:02.242485 1545299 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:18:02.242604 1545299 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:18:03.742009 1545299 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501596059s
	I1217 02:18:03.744939 1545299 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 02:18:03.745235 1545299 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 02:18:03.745331 1545299 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 02:18:03.745411 1545299 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 02:18:07.923857 1545299 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.178465163s
	I1217 02:18:09.892203 1545299 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.147181526s
	I1217 02:18:11.247220 1545299 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.501984329s
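The control-plane-check phase above polls kube-apiserver's /livez plus the controller-manager and scheduler health ports until each answers 200. A minimal poller in the same spirit; the self-signed serving certs force InsecureSkipVerify here, and the interval is an assumption rather than kubeadm's exact internals:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // Same endpoint and 4m budget kubeadm reports for kube-apiserver above.
        if err := waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute); err != nil {
            log.Fatal(err)
        }
    }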
	I1217 02:18:11.282132 1545299 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 02:18:11.306951 1545299 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 02:18:11.325056 1545299 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 02:18:11.325270 1545299 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-721629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 02:18:11.344486 1545299 kubeadm.go:319] [bootstrap-token] Using token: tkxt5r.igi7ri0yehb1v5ky
	I1217 02:18:11.347574 1545299 out.go:252]   - Configuring RBAC rules ...
	I1217 02:18:11.347703 1545299 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 02:18:11.356614 1545299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 02:18:11.377726 1545299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 02:18:11.384810 1545299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 02:18:11.394632 1545299 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 02:18:11.405155 1545299 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 02:18:11.654548 1545299 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 02:18:12.082587 1545299 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 02:18:12.656354 1545299 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 02:18:12.657563 1545299 kubeadm.go:319] 
	I1217 02:18:12.657638 1545299 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 02:18:12.657672 1545299 kubeadm.go:319] 
	I1217 02:18:12.657750 1545299 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 02:18:12.657754 1545299 kubeadm.go:319] 
	I1217 02:18:12.657779 1545299 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 02:18:12.657838 1545299 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 02:18:12.657887 1545299 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 02:18:12.657891 1545299 kubeadm.go:319] 
	I1217 02:18:12.657945 1545299 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 02:18:12.657948 1545299 kubeadm.go:319] 
	I1217 02:18:12.657996 1545299 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 02:18:12.657999 1545299 kubeadm.go:319] 
	I1217 02:18:12.658051 1545299 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 02:18:12.658126 1545299 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 02:18:12.658194 1545299 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 02:18:12.658198 1545299 kubeadm.go:319] 
	I1217 02:18:12.658282 1545299 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 02:18:12.658360 1545299 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 02:18:12.658364 1545299 kubeadm.go:319] 
	I1217 02:18:12.658453 1545299 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tkxt5r.igi7ri0yehb1v5ky \
	I1217 02:18:12.658557 1545299 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6031ce8e9641affed80fbd3275524f7a99669ab559b9101b175d38b0e710ae78 \
	I1217 02:18:12.658577 1545299 kubeadm.go:319] 	--control-plane 
	I1217 02:18:12.658581 1545299 kubeadm.go:319] 
	I1217 02:18:12.658673 1545299 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 02:18:12.658677 1545299 kubeadm.go:319] 
	I1217 02:18:12.658759 1545299 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tkxt5r.igi7ri0yehb1v5ky \
	I1217 02:18:12.658861 1545299 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6031ce8e9641affed80fbd3275524f7a99669ab559b9101b175d38b0e710ae78 
	I1217 02:18:12.662347 1545299 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 02:18:12.662579 1545299 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:18:12.662689 1545299 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
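The --discovery-token-ca-cert-hash in the kubeadm join lines above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. Recomputing it from a local copy of ca.crt, as a sketch:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // ca.crt is the cluster CA, /var/lib/minikube/certs/ca.crt on the node.
        data, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        // Compare against the hash printed in the kubeadm join line above.
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }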
	I1217 02:18:12.662708 1545299 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:18:12.665897 1545299 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 02:18:12.668887 1545299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 02:18:12.676710 1545299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 02:18:12.691873 1545299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 02:18:12.691984 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:12.692046 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-721629 minikube.k8s.io/updated_at=2025_12_17T02_18_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=enable-default-cni-721629 minikube.k8s.io/primary=true
	I1217 02:18:12.853296 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:12.853366 1545299 ops.go:34] apiserver oom_adj: -16
	I1217 02:18:13.353832 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:13.853540 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:14.353508 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:14.853732 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:15.354277 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:15.854233 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:16.354396 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:16.853930 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:17.353779 1545299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:18:17.650584 1545299 kubeadm.go:1114] duration metric: took 4.958639484s to wait for elevateKubeSystemPrivileges
	I1217 02:18:17.650610 1545299 kubeadm.go:403] duration metric: took 20.938048641s to StartCluster
	I1217 02:18:17.650630 1545299 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:18:17.650689 1545299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:18:17.651693 1545299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:18:17.651883 1545299 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:18:17.652029 1545299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 02:18:17.652487 1545299 config.go:182] Loaded profile config "enable-default-cni-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 02:18:17.652524 1545299 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:18:17.652582 1545299 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-721629"
	I1217 02:18:17.652596 1545299 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-721629"
	I1217 02:18:17.652616 1545299 host.go:66] Checking if "enable-default-cni-721629" exists ...
	I1217 02:18:17.653109 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:18:17.654299 1545299 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-721629"
	I1217 02:18:17.654320 1545299 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-721629"
	I1217 02:18:17.654598 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:18:17.657488 1545299 out.go:179] * Verifying Kubernetes components...
	I1217 02:18:17.662699 1545299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:18:17.686353 1545299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:18:17.689310 1545299 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:18:17.689347 1545299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:18:17.689415 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:18:17.724688 1545299 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-721629"
	I1217 02:18:17.724730 1545299 host.go:66] Checking if "enable-default-cni-721629" exists ...
	I1217 02:18:17.725218 1545299 cli_runner.go:164] Run: docker container inspect enable-default-cni-721629 --format={{.State.Status}}
	I1217 02:18:17.735910 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	I1217 02:18:17.770877 1545299 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:18:17.770900 1545299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:18:17.770972 1545299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-721629
	I1217 02:18:17.803465 1545299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/enable-default-cni-721629/id_rsa Username:docker}
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348124275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348135139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348172948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348191221Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348204899Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348219340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348228637Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348243127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348261737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348290923Z" level=info msg="Connect containerd service"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348584284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.349144971Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.367921231Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368000485Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368028342Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368075579Z" level=info msg="Start recovering state"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409358181Z" level=info msg="Start event monitor"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409558676Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409664105Z" level=info msg="Start streaming server"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409753861Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409976198Z" level=info msg="runtime interface starting up..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410064724Z" level=info msg="starting plugins..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410151470Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:03:12 no-preload-178365 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.416611073Z" level=info msg="containerd successfully booted in 0.090598s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:18:19.336908    8083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:18:19.338477    8083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:18:19.340124    8083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:18:19.340495    8083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:18:19.342096    8083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
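The connection-refused errors above mean nothing is listening on the apiserver port inside the node, which is consistent with the crash-looping kubelet shown below. A hedged way to confirm that directly, assuming curl is present in the kicbase image (the profile name is taken from this log):
    # fails with "connection refused" while the apiserver is down
    docker exec no-preload-178365 curl -sk https://localhost:8443/healthz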
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:18:19 up  8:00,  0 user,  load average: 3.46, 1.78, 1.47
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:18:15 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:18:16 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 17 02:18:16 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:16 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:16 no-preload-178365 kubelet[7948]: E1217 02:18:16.670850    7948 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:18:16 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:18:16 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:18:17 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 17 02:18:17 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:17 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:17 no-preload-178365 kubelet[7953]: E1217 02:18:17.465026    7953 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:18:17 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:18:17 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:18 no-preload-178365 kubelet[7981]: E1217 02:18:18.245551    7981 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:18:18 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:18:18 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:18 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:18:19 no-preload-178365 kubelet[8025]: E1217 02:18:19.060826    8025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:18:19 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:18:19 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
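The kubelet section above is the root cause of this group's failures: the v1.35.0-beta.0 kubelet is configured to refuse cgroups v1 hosts, so it crash-loops (restart counter past 1200) and the apiserver never comes back, leaving every status probe reporting "Stopped". A quick diagnostic sketch for checking which cgroup hierarchy a host or minikube node is running; the command is standard coreutils and not part of the test:
    stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on cgroups v2, "tmpfs" on cgroups v1
The Jenkins worker here runs Ubuntu 20.04 (per the docker info output later in this report), which boots with the legacy cgroups v1 hierarchy by default, matching the [WARNING SystemVerification] that kubeadm printed during cluster bring-up.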
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 2 (484.332021ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.72s)
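The {{.APIServer}}, {{.Kubelet}} and {{.Host}} selectors used by these checks are Go templates evaluated against minikube's status output, so several fields can be read in one invocation; a sketch combining the fields that appear in this report:
    out/minikube-linux-arm64 status -p no-preload-178365 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'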

TestStartStop/group/newest-cni/serial/Pause (9.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-456492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (317.850587ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-456492 -n newest-cni-456492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (299.478601ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-456492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (302.584604ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-456492 -n newest-cni-456492
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (309.802904ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
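The sequence exercised above is pause, status, unpause, status; reproduced by hand against the same profile it is (commands taken verbatim from the log):
    out/minikube-linux-arm64 pause -p newest-cni-456492 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-456492
    out/minikube-linux-arm64 unpause -p newest-cni-456492 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p newest-cni-456492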
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-456492
helpers_test.go:244: (dbg) docker inspect newest-cni-456492:

-- stdout --
	[
	    {
	        "Id": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	        "Created": "2025-12-17T01:55:16.478266179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1498839,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:13.106483917Z",
	            "FinishedAt": "2025-12-17T02:05:11.800057613Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hosts",
	        "LogPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2-json.log",
	        "Name": "/newest-cni-456492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-456492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-456492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	                "LowerDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456492",
	                "Source": "/var/lib/docker/volumes/newest-cni-456492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456492",
	                "name.minikube.sigs.k8s.io": "newest-cni-456492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab62f167f6067cd4de4467e8c5dccfa413a051915ec69dabeccc65bc59cf0aee",
	            "SandboxKey": "/var/run/docker/netns/ab62f167f606",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-456492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:ab:b6:47:86:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78c732410c8ee8b3c147900aac111eb07f35c057f64efcecb5d20570fed785bc",
	                    "EndpointID": "c3b1f12eab3f1b8581f7a3375c215b8790019ebdc7d258d9fd03a25fc5d36dd1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456492",
	                        "72c4fe7eb784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
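Individual fields can be pulled out of that inspect output with the same Go-template mechanism the test harness uses; a sketch resolving the host port Docker mapped to the node's 22/tcp, mirroring the cli_runner calls earlier in this report:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-456492   # prints 34259 for this run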
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (315.334024ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25: (1.667521999s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p newest-cni-456492 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-456492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ image   │ newest-cni-456492 image list --format=json                                                                                                                                                                                                                 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	│ pause   │ -p newest-cni-456492 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	│ unpause │ -p newest-cni-456492 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
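	Given that line format, warning- and error-level entries can be filtered out of a captured log with a simple prefix match; a sketch, assuming the output has been saved to a file named minikube.log:
	    grep -E '^[EW][0-9]{4} ' minikube.log   # matches the E/W severity letter followed by mmdd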
	I1217 02:05:12.850501 1498704 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:05:12.850637 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.850649 1498704 out.go:374] Setting ErrFile to fd 2...
	I1217 02:05:12.850655 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.851041 1498704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:05:12.851511 1498704 out.go:368] Setting JSON to false
	I1217 02:05:12.852479 1498704 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28063,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:05:12.852572 1498704 start.go:143] virtualization:  
	I1217 02:05:12.855474 1498704 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:05:12.857672 1498704 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:12.857773 1498704 notify.go:221] Checking for updates...
	I1217 02:05:12.863254 1498704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:12.866037 1498704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:12.868948 1498704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:05:12.871863 1498704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:05:12.874787 1498704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:12.878103 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:12.878662 1498704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:12.900447 1498704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:05:12.900598 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:12.960234 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:12.950894493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:12.960347 1498704 docker.go:319] overlay module found
	I1217 02:05:12.963370 1498704 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:12.966210 1498704 start.go:309] selected driver: docker
	I1217 02:05:12.966233 1498704 start.go:927] validating driver "docker" against &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:12.966382 1498704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:12.967091 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:13.019814 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:13.010546439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:13.020178 1498704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:05:13.020210 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:13.020262 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:13.020307 1498704 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:13.023434 1498704 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 02:05:13.026234 1498704 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:05:13.029131 1498704 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:13.031994 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:13.032048 1498704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 02:05:13.032060 1498704 cache.go:65] Caching tarball of preloaded images
	I1217 02:05:13.032113 1498704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:13.032150 1498704 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:05:13.032162 1498704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 02:05:13.032281 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.052501 1498704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:13.052525 1498704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:13.052542 1498704 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:13.052572 1498704 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:13.052633 1498704 start.go:364] duration metric: took 38.597µs to acquireMachinesLock for "newest-cni-456492"
	I1217 02:05:13.052657 1498704 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:13.052663 1498704 fix.go:54] fixHost starting: 
	I1217 02:05:13.052926 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.069585 1498704 fix.go:112] recreateIfNeeded on newest-cni-456492: state=Stopped err=<nil>
	W1217 02:05:13.069617 1498704 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 02:05:11.635157 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:14.135122 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:16.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:13.072747 1498704 out.go:252] * Restarting existing docker container for "newest-cni-456492" ...
	I1217 02:05:13.072837 1498704 cli_runner.go:164] Run: docker start newest-cni-456492
	I1217 02:05:13.388698 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.414091 1498704 kic.go:430] container "newest-cni-456492" state is running.
	I1217 02:05:13.414525 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:13.433261 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.433961 1498704 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:13.434162 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:13.455043 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:13.455367 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:13.455376 1498704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:13.456190 1498704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:16.589394 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
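
The dial at 02:05:13.456 fails with "handshake failed: EOF" because sshd inside the just-restarted container is not accepting connections yet; the provisioner keeps retrying until the hostname probe succeeds three seconds later. A minimal sketch of that wait-for-SSH loop (not minikube's actual code; the address is the mapped port from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls until the SSH port accepts TCP connections or the
// deadline passes, mirroring the retry behavior implied by the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		time.Sleep(500 * time.Millisecond) // back off and retry
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	// 127.0.0.1:34259 is the host port Docker mapped to the
	// container's 22/tcp, per the log lines above.
	if err := waitForSSH("127.0.0.1:34259", time.Minute); err != nil {
		fmt.Println(err)
	}
}
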
	I1217 02:05:16.589424 1498704 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 02:05:16.589509 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.608291 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.608611 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.608628 1498704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 02:05:16.748318 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.748417 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.766749 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.767082 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.767106 1498704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:16.899757 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
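
The heredoc above (and the same tmp-file-plus-sudo-cp pattern reused at 02:05:19 for host.minikube.internal and control-plane.minikube.internal) keeps the node's own hostname resolvable after renaming it. A rough Go equivalent, illustrative only — the logic follows the shell snippet, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: if no line in /etc/hosts
// already ends with the hostname, either rewrite an existing
// "127.0.1.1 ..." line or append one (the Debian convention for
// mapping the local hostname).
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // an entry for this hostname already exists
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-456492"); err != nil {
		fmt.Println(err)
	}
}
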
	I1217 02:05:16.899788 1498704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:05:16.899820 1498704 ubuntu.go:190] setting up certificates
	I1217 02:05:16.899839 1498704 provision.go:84] configureAuth start
	I1217 02:05:16.899906 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:16.924665 1498704 provision.go:143] copyHostCerts
	I1217 02:05:16.924743 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:05:16.924752 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:05:16.924828 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:05:16.924938 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:05:16.924943 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:05:16.924976 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:05:16.925038 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:05:16.925047 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:05:16.925072 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:05:16.925127 1498704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
	I1217 02:05:17.601803 1498704 provision.go:177] copyRemoteCerts
	I1217 02:05:17.601873 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:17.601926 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.636357 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.741722 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:05:17.761034 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:17.779707 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:17.797837 1498704 provision.go:87] duration metric: took 897.968313ms to configureAuth
	I1217 02:05:17.797870 1498704 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:17.798087 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:17.798100 1498704 machine.go:97] duration metric: took 4.364124237s to provisionDockerMachine
	I1217 02:05:17.798118 1498704 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 02:05:17.798134 1498704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:17.798198 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:17.798254 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.815970 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.909838 1498704 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:17.913351 1498704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:17.913383 1498704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:17.913395 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:05:17.913453 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:05:17.913544 1498704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:05:17.913681 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:17.921360 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:17.939679 1498704 start.go:296] duration metric: took 141.5414ms for postStartSetup
	I1217 02:05:17.939826 1498704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:17.939877 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.957594 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.059706 1498704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:18.065122 1498704 fix.go:56] duration metric: took 5.012436797s for fixHost
	I1217 02:05:18.065156 1498704 start.go:83] releasing machines lock for "newest-cni-456492", held for 5.012509749s
	I1217 02:05:18.065242 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:18.082756 1498704 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:18.082825 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.083064 1498704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:05:18.083126 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.102210 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.102306 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.193581 1498704 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:18.286865 1498704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:18.291506 1498704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:18.291604 1498704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:18.301001 1498704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:18.301023 1498704 start.go:496] detecting cgroup driver to use...
	I1217 02:05:18.301056 1498704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:18.301104 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:05:18.318916 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:18.332388 1498704 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:05:18.332450 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:05:18.348560 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:05:18.361841 1498704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:05:18.501489 1498704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:05:18.625467 1498704 docker.go:234] disabling docker service ...
	I1217 02:05:18.625544 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:05:18.642408 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:05:18.656014 1498704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:05:18.765362 1498704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:05:18.886790 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:18.900617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:18.915221 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:18.924900 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:18.934313 1498704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:18.934389 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:18.943795 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.953183 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:18.962127 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.971122 1498704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:18.979419 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:18.988380 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:18.999817 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:19.010244 1498704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:19.018996 1498704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:19.026929 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.133908 1498704 ssh_runner.go:195] Run: sudo systemctl restart containerd
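
The run of sed edits above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false so containerd matches the "cgroupfs" driver detected at 02:05:18.301 (the kubelet config below sets cgroupDriver: cgroupfs, and the two must agree), legacy runtime types are normalized to io.containerd.runc.v2, and then daemon-reload plus a containerd restart pick the file up. A sketch of the SystemdCgroup substitution in Go — the config fragment is hypothetical:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// hypothetical fragment of /etc/containerd/config.toml
	conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
	// same substitution as the sed above: force SystemdCgroup = false
	// while preserving the line's indentation
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(string(re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))))
}
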
	I1217 02:05:19.268405 1498704 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:05:19.268490 1498704 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:05:19.272284 1498704 start.go:564] Will wait 60s for crictl version
	I1217 02:05:19.272347 1498704 ssh_runner.go:195] Run: which crictl
	I1217 02:05:19.275756 1498704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:19.301130 1498704 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:05:19.301201 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.322372 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.348617 1498704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:05:19.351633 1498704 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:05:19.367774 1498704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:05:19.371830 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.384786 1498704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:05:19.387816 1498704 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:19.387972 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:19.388067 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.414283 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.414309 1498704 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:05:19.414396 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.439246 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.439272 1498704 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:05:19.439280 1498704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:05:19.439400 1498704 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:05:19.439475 1498704 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:05:19.464932 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:19.464957 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:19.464978 1498704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:05:19.465000 1498704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:19.465118 1498704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:05:19.465204 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:19.473220 1498704 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:19.473323 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:19.481191 1498704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:05:19.494733 1498704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:19.508255 1498704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
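
"scp memory --> /path (N bytes)" in these logs means the payload was rendered in memory (a systemd unit, the kubeadm.yaml) and streamed over the SSH session rather than copied from a file on the host. A minimal sketch of that idea; it shells out to the ssh binary for brevity, whereas minikube itself drives an SSH session from Go (assumption, simplified here):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory streams an in-memory payload to a remote path via
// "sudo tee"; the bytes never exist as a local file.
func scpMemory(target string, data []byte, remotePath string) error {
	cmd := exec.Command("ssh", target, "sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	unit := []byte("[Unit]\nWants=containerd.service\n")
	// port/key selection omitted; 127.0.0.1:34259 is the mapped SSH port above
	if err := scpMemory("docker@127.0.0.1", unit, "/lib/systemd/system/kubelet.service"); err != nil {
		fmt.Println(err)
	}
}
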
	I1217 02:05:19.521299 1498704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:19.524923 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.534869 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.640328 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:19.658104 1498704 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 02:05:19.658171 1498704 certs.go:195] generating shared ca certs ...
	I1217 02:05:19.658202 1498704 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:19.658408 1498704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:05:19.658487 1498704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:05:19.658525 1498704 certs.go:257] generating profile certs ...
	I1217 02:05:19.658693 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 02:05:19.658805 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 02:05:19.658882 1498704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 02:05:19.659021 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:05:19.659079 1498704 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:19.659103 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:05:19.659164 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:05:19.659220 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:05:19.659286 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:05:19.659364 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:19.660007 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:19.680759 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:05:19.702848 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:19.724636 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:05:19.743745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:19.766745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:19.785567 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:19.805217 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:05:19.823885 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:05:19.842565 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:05:19.861136 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:19.881009 1498704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:19.900011 1498704 ssh_runner.go:195] Run: openssl version
	I1217 02:05:19.907885 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.916589 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:19.925294 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929759 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929879 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.973048 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:19.981056 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:05:19.988859 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:05:19.996704 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001580 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001857 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.047306 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:05:20.055839 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.063938 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:05:20.072095 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076535 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076605 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.118765 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
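
The pattern repeated three times above installs each CA under /usr/share/ca-certificates and then checks for a symlink like /etc/ssl/certs/b5213941.0: "openssl x509 -hash" prints the certificate's subject-name hash, and OpenSSL resolves trust anchors by looking up <hash>.0 in the certs directory. A sketch of the hash-and-verify step (error handling simplified relative to minikube):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// same command the log runs: print the subject-name hash only
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL locates CA certs via /etc/ssl/certs/<subject-hash>.0
	link := "/etc/ssl/certs/" + hash + ".0"
	fmt.Println("expected symlink:", link) // e.g. b5213941.0 in the log
}
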
	I1217 02:05:20.126976 1498704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:20.131206 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:20.172934 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:20.214362 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:20.255854 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:20.297036 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:20.339864 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
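
Each "openssl x509 ... -checkend 86400" above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certs can be reused. The same check in Go, as a self-contained sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at pemPath
// expires within duration d — the semantics of openssl's -checkend.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
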
	I1217 02:05:20.381722 1498704 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:20.381822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:05:20.381904 1498704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:05:20.424644 1498704 cri.go:89] found id: ""
	I1217 02:05:20.424764 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:20.433427 1498704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:20.433456 1498704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:20.433550 1498704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:20.441251 1498704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:20.442099 1498704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.442456 1498704 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456492" cluster setting kubeconfig missing "newest-cni-456492" context setting]
	I1217 02:05:20.442986 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.445078 1498704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:20.453918 1498704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:05:20.453968 1498704 kubeadm.go:602] duration metric: took 20.505601ms to restartPrimaryControlPlane
	I1217 02:05:20.453978 1498704 kubeadm.go:403] duration metric: took 72.266987ms to StartCluster
	I1217 02:05:20.453993 1498704 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.454058 1498704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.454938 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.455145 1498704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:05:20.455516 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:20.455530 1498704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:20.455683 1498704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456492"
	I1217 02:05:20.455704 1498704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456492"
	I1217 02:05:20.455734 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456291 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.456447 1498704 addons.go:70] Setting dashboard=true in profile "newest-cni-456492"
	I1217 02:05:20.456459 1498704 addons.go:239] Setting addon dashboard=true in "newest-cni-456492"
	W1217 02:05:20.456465 1498704 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:20.456487 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456873 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.457295 1498704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456492"
	I1217 02:05:20.457327 1498704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456492"
	I1217 02:05:20.457617 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.460758 1498704 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:20.464032 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:20.511072 1498704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:20.511238 1498704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:20.511526 1498704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456492"
	I1217 02:05:20.511584 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.512215 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.514400 1498704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.514426 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:20.514495 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.517419 1498704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 02:05:18.635204 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:21.135093 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:20.520345 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:20.520380 1498704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:20.520470 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.545933 1498704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.545958 1498704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:20.546028 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.571506 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.597655 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.610038 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.744231 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:20.749535 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.770211 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.807578 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:20.807656 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:20.822894 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:20.822966 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:20.838508 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:20.838583 1498704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:20.854473 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:20.854546 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:20.870442 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:20.870510 1498704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:20.892689 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:20.892763 1498704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:20.907212 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:20.907283 1498704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:05:20.920377 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:20.920447 1498704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:20.934242 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:20.934313 1498704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:20.949356 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:21.122136 1498704 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:05:21.122238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:21.122377 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122428 1498704 retry.go:31] will retry after 140.698925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122498 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122514 1498704 retry.go:31] will retry after 200.872114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
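
These failures are expected at this point: the apiserver is still coming up after the container restart, so every kubectl apply hits connection refused on localhost:8443 and the retry helper reschedules it with a growing, jittered delay (~140ms, ~200ms, then ~347ms for the dashboard batch below). A minimal sketch of that policy — not minikube's retry.go itself:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered delay between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // grow the base delay between attempts
	}
	return err
}

func main() {
	i := 0
	// simulate an apiserver that refuses the first two applies
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		i++
		if i < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
}
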
	W1217 02:05:21.122730 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122750 1498704 retry.go:31] will retry after 347.753215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten validation errors in the failure above)
	I1217 02:05:21.264115 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.324524 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:21.326955 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.326987 1498704 retry.go:31] will retry after 509.503403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	W1217 02:05:21.390952 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.391056 1498704 retry.go:31] will retry after 486.50092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:21.471226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.536155 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (the same ten validation errors, one per dashboard manifest, as first listed at 02:05:21.122 above)
	I1217 02:05:21.536193 1498704 retry.go:31] will retry after 374.340896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:21.623199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
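In parallel with the applies, minikube polls for a running apiserver process about twice per second. pgrep here combines -f (match against the full command line), -x (the pattern must match that command line exactly) and -n (pick the newest matching process); run by hand the same check reads:

	# Prints the newest kube-apiserver PID if one is running; exits non-zero otherwise
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"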
	I1217 02:05:21.836797 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.878378 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:21.911452 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.932525 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.932573 1498704 retry.go:31] will retry after 673.446858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	W1217 02:05:22.024062 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.024104 1498704 retry.go:31] will retry after 357.640722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	W1217 02:05:22.030810 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (the same ten validation errors, one per dashboard manifest, as first listed at 02:05:21.122 above)
	I1217 02:05:22.030855 1498704 retry.go:31] will retry after 697.108634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:22.122842 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:22.382402 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.447494 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.447529 1498704 retry.go:31] will retry after 907.58474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:22.606794 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:22.623237 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:22.712284 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.712316 1498704 retry.go:31] will retry after 1.166453431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:22.728640 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.790257 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (the same ten validation errors, one per dashboard manifest, as first listed at 02:05:21.122 above)
	I1217 02:05:22.790294 1498704 retry.go:31] will retry after 693.242896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	W1217 02:05:23.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:25.634571 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
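The two node_ready warnings above are interleaved from the parallel no-preload test (pid 1494358), which is failing against its own apiserver at 192.168.76.2:8443 for the same reason. Its check is equivalent to reading the node's Ready condition via kubectl (context name taken from the log; the jsonpath query is illustrative):

	# Read the node's Ready condition status the way node_ready.go does via the API
	kubectl --context no-preload-178365 get node no-preload-178365 \
		-o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'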
	I1217 02:05:23.122710 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.356122 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:23.441808 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.441876 1498704 retry.go:31] will retry after 812.660244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:23.484193 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:23.553009 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (the same ten validation errors, one per dashboard manifest, as first listed at 02:05:21.122 above)
	I1217 02:05:23.553088 1498704 retry.go:31] will retry after 1.540590446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:23.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.878932 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:23.940625 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.940657 1498704 retry.go:31] will retry after 1.715347401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:24.123129 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:24.255570 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.318166 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.318201 1498704 retry.go:31] will retry after 2.528105033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:24.622416 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.094702 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.122740 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.190434 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (the same ten validation errors, one per dashboard manifest, as first listed at 02:05:21.122 above)
	I1217 02:05:25.190468 1498704 retry.go:31] will retry after 2.137532007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the failure above)
	I1217 02:05:25.622874 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.656976 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.735191 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.735228 1498704 retry.go:31] will retry after 1.824141068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
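These repeated validation failures are all the same symptom: kubectl performs client-side validation by downloading the /openapi/v2 schema from the apiserver, so while nothing is listening on localhost:8443 every apply fails with "connection refused" before any manifest reaches the cluster (the --validate=false hint in the error text would only skip the schema download; the apply itself would still fail against an unreachable server). As a rough illustration, a pre-flight reachability probe like the hypothetical Go helper below shows the condition the retries are waiting on:

package probe

import (
	"net"
	"time"
)

// apiserverReachable is a hypothetical pre-flight check (not part of
// minikube): it reports whether anything is accepting TCP connections at
// addr, which is the same reason kubectl's /openapi/v2 download above
// fails with "connect: connection refused".
func apiserverReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}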
	I1217 02:05:26.122718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.622402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.847039 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:26.915825 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.915864 1498704 retry.go:31] will retry after 3.628983163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:27.123109 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:27.329106 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:27.406949 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.406981 1498704 retry.go:31] will retry after 4.03347247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:27.560441 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:27.620941 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.620972 1498704 retry.go:31] will retry after 3.991176553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:27.623048 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:27.635077 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:29.635231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:28.123323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.622690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.123056 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.622383 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:30.122331 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
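The half-second cadence of sudo pgrep -xnf kube-apiserver.*minikube.* above is minikube polling for the apiserver process to reappear inside the node. A minimal sketch of such a poll loop, with assumed names and a fixed 500ms interval (not minikube's actual code):

package waitproc

import (
	"errors"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf` until the process pattern
// matches or the timeout elapses, mirroring the loop in the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("kube-apiserver process never appeared")
}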
	I1217 02:05:30.545057 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:30.621785 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.621822 1498704 retry.go:31] will retry after 4.4452238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:30.622853 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.122373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.440743 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:31.509992 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.510031 1498704 retry.go:31] will retry after 5.407597033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:31.613135 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:31.622584 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:31.697739 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.697776 1498704 retry.go:31] will retry after 2.825488937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:32.122427 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:32.622356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:32.134521 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:34.135119 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:36.135210 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:33.122865 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.622376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.122833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.523532 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:34.583134 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.583163 1498704 retry.go:31] will retry after 5.545323918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:34.622442 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:35.068147 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:35.122850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:35.134133 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.134169 1498704 retry.go:31] will retry after 4.861802964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:35.622377 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.122369 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.622378 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.918683 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:36.978447 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.978481 1498704 retry.go:31] will retry after 6.962519237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:37.122560 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:37.622836 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:38.635154 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:41.134707 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:38.122524 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.622862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.122871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.623166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.996206 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:40.063255 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.063292 1498704 retry.go:31] will retry after 7.781680021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:40.122526 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:40.129164 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:40.214505 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.214533 1498704 retry.go:31] will retry after 8.678807682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:40.622298 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.122333 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.622358 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.127159 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.622438 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:43.635439 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:46.135272 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
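In parallel, the no-preload-178365 test (process 1494358) is polling that node's Ready condition against 192.168.76.2:8443, which is refusing connections for the same reason. A hedged client-go sketch of such a readiness check (the function name and shape are assumptions, not the harness's actual code):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady fetches the node and inspects its "Ready" condition. With the
// apiserver refusing connections, Get returns the same "connection refused"
// error logged above, and the caller logs a warning and retries.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", name)
}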
	I1217 02:05:43.122461 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.622352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.941994 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:44.001689 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.001730 1498704 retry.go:31] will retry after 6.066883065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:44.123123 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:44.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.126164 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.623052 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.122898 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.622334 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.122393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.622323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.845223 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:48.634542 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:50.635081 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:47.908667 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:47.908705 1498704 retry.go:31] will retry after 18.007710991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
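The retry delays in these blocks grow from roughly 2s toward 18s with slight randomization, i.e. exponential backoff with jitter. A small Go sketch in that spirit (illustrative only; not minikube's actual retry implementation):

package retrysketch

import (
	"math/rand"
	"time"
)

// retryWithBackoff retries op with exponentially growing, jittered delays,
// in the spirit of the "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Up to 50% jitter keeps concurrent retries from synchronizing.
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}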
	I1217 02:05:48.122861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.622412 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.894229 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:48.969090 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.969125 1498704 retry.go:31] will retry after 16.055685136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:49.122381 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:49.622837 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:50.069336 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:50.122996 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:50.134357 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.134397 1498704 retry.go:31] will retry after 18.576318696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.622399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.122356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.623152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.122522 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.622365 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:53.135083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:55.135448 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
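
The two warnings above come from a different process (pid 1494358) than the surrounding lines (pid 1498704): the parallel no-preload test is polling the Ready condition of node "no-preload-178365" against its own apiserver at 192.168.76.2:8443, which is equally unreachable. The report merges both streams, which is why the timestamps briefly run out of order here. A hedged one-liner for the same check, assuming a working kubeconfig for that cluster:

    # read just the Ready condition of the node the test is waiting on
    kubectl get node no-preload-178365 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
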
	I1217 02:05:53.123228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.622373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.622394 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.122388 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.122434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.622357 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.122345 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.622407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:57.635130 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:00.134795 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:58.122690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.622871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.122944 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.622822 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.123626 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.623133 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.122517 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.622861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.122995 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.622415 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:02.135223 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:04.634982 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:03.122366 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.623001 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.122805 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.622382 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.025226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:05.088234 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.088268 1498704 retry.go:31] will retry after 18.521411157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.122353 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.622518 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.916578 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:05.977704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.977737 1498704 retry.go:31] will retry after 29.235613176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
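
The retry delays logged by retry.go (18.0s, 16.1s, 18.6s, 18.5s, 29.2s, and later 24.9s and 35.8s) are jittered and loosely increasing rather than fixed. A minimal shell sketch of the same pattern, reusing the addon manifest path from the log; this only approximates the behavior and is not minikube's actual implementation:

    # retry with a growing, jittered delay until the apply succeeds
    delay=15
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep $((delay + RANDOM % 10))   # add up to 10s of jitter
      delay=$((delay + 5))             # grow the base delay
    done
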
	I1217 02:06:06.123051 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:06.623116 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.122863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.622361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:07.134988 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:09.135112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:11.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:08.123131 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.622326 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.711597 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:08.773115 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:08.773147 1498704 retry.go:31] will retry after 24.92518591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:09.122643 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:09.622393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.122375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.622634 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.122959 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.622850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.122346 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.622435 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:13.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:16.134662 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:13.122648 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.622828 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.123317 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.622872 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.122361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.622296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.622835 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.122778 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:18.135126 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:20.135188 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:18.123152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.623163 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.122407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.622841 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.123196 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.622898 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:20.622982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:20.655063 1498704 cri.go:89] found id: ""
	I1217 02:06:20.655091 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.655100 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:20.655106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:20.655169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:20.687901 1498704 cri.go:89] found id: ""
	I1217 02:06:20.687924 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.687932 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:20.687938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:20.687996 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:20.713818 1498704 cri.go:89] found id: ""
	I1217 02:06:20.713845 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.713854 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:20.713860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:20.713918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:20.738353 1498704 cri.go:89] found id: ""
	I1217 02:06:20.738376 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.738384 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:20.738396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:20.738455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:20.763275 1498704 cri.go:89] found id: ""
	I1217 02:06:20.763300 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.763309 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:20.763316 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:20.763377 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:20.787303 1498704 cri.go:89] found id: ""
	I1217 02:06:20.787328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.787337 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:20.787343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:20.787402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:20.812203 1498704 cri.go:89] found id: ""
	I1217 02:06:20.812230 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.812238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:20.812244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:20.812304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:20.836788 1498704 cri.go:89] found id: ""
	I1217 02:06:20.836814 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.836823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:20.836831 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:20.836842 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:20.901301 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:20.901324 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:20.901337 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:20.927207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:20.927244 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:20.955351 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:20.955377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:21.010892 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:21.010928 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
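
Each polling cycle above follows the same shape: pgrep checks for a kube-apiserver process, crictl then enumerates every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and finds none, and minikube falls back to gathering kubelet, containerd, dmesg, and container-status logs for the report. An empty crictl result means containerd's CRI has no such container at all, running or exited, so the next place to look is why kubelet is not creating the static pods. The commands below are the same ones the harness runs and can be used by hand on the node:

    # confirm containerd has no apiserver container, running or exited
    sudo crictl ps -a --name=kube-apiserver
    # check kubelet's recent log for static-pod or CRI errors
    sudo journalctl -u kubelet -n 400 | tail -n 50
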
	W1217 02:06:22.635190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:25.135234 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:23.526340 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:23.536950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:23.537021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:23.561240 1498704 cri.go:89] found id: ""
	I1217 02:06:23.561267 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.561276 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:23.561282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:23.561340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:23.586385 1498704 cri.go:89] found id: ""
	I1217 02:06:23.586407 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.586415 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:23.586421 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:23.586479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:23.610820 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:06:23.612177 1498704 cri.go:89] found id: ""
	I1217 02:06:23.612201 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.612210 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:23.612216 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:23.612270 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1217 02:06:23.698147 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698227 1498704 retry.go:31] will retry after 35.769421328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698299 1498704 cri.go:89] found id: ""
	I1217 02:06:23.698328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.698348 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:23.698379 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:23.698473 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:23.730479 1498704 cri.go:89] found id: ""
	I1217 02:06:23.730555 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.730569 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:23.730577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:23.730656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:23.757694 1498704 cri.go:89] found id: ""
	I1217 02:06:23.757717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.757726 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:23.757732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:23.757802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:23.787070 1498704 cri.go:89] found id: ""
	I1217 02:06:23.787145 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.787162 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:23.787170 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:23.787231 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:23.815895 1498704 cri.go:89] found id: ""
	I1217 02:06:23.815928 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.815937 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:23.815947 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:23.815977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:23.845530 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:23.845558 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:23.904348 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:23.904385 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:23.919409 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:23.919438 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:23.986183 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:23.986246 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:23.986266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:26.512910 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:26.523572 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:26.523644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:26.549045 1498704 cri.go:89] found id: ""
	I1217 02:06:26.549077 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.549087 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:26.549100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:26.549181 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:26.573386 1498704 cri.go:89] found id: ""
	I1217 02:06:26.573409 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.573417 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:26.573423 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:26.573485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:26.597629 1498704 cri.go:89] found id: ""
	I1217 02:06:26.597673 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.597688 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:26.597695 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:26.597755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:26.625905 1498704 cri.go:89] found id: ""
	I1217 02:06:26.625933 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.625942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:26.625949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:26.626016 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:26.663442 1498704 cri.go:89] found id: ""
	I1217 02:06:26.663466 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.663475 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:26.663482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:26.663565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:26.692315 1498704 cri.go:89] found id: ""
	I1217 02:06:26.692342 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.692351 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:26.692362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:26.692422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:26.718259 1498704 cri.go:89] found id: ""
	I1217 02:06:26.718287 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.718296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:26.718303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:26.718361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:26.743360 1498704 cri.go:89] found id: ""
	I1217 02:06:26.743383 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.743391 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:26.743400 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:26.743412 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:26.770132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:26.770158 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:26.829657 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:26.829749 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:26.845511 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:26.845538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:26.912984 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:26.913004 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:26.913017 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:27.635261 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:30.135207 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:29.440066 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:29.450548 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:29.450621 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:29.474768 1498704 cri.go:89] found id: ""
	I1217 02:06:29.474800 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.474809 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:29.474816 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:29.474886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:29.498947 1498704 cri.go:89] found id: ""
	I1217 02:06:29.498969 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.498977 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:29.498983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:29.499041 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:29.523540 1498704 cri.go:89] found id: ""
	I1217 02:06:29.523564 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.523573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:29.523579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:29.523643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:29.556044 1498704 cri.go:89] found id: ""
	I1217 02:06:29.556069 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.556078 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:29.556084 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:29.556144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:29.581373 1498704 cri.go:89] found id: ""
	I1217 02:06:29.581399 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.581408 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:29.581414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:29.581485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:29.607453 1498704 cri.go:89] found id: ""
	I1217 02:06:29.607479 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.607489 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:29.607495 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:29.607576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:29.639841 1498704 cri.go:89] found id: ""
	I1217 02:06:29.639865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.639875 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:29.639881 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:29.639938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:29.670608 1498704 cri.go:89] found id: ""
	I1217 02:06:29.670635 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.670643 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:29.670653 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:29.670665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:29.728148 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:29.728181 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:29.743004 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:29.743029 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:29.815740 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:29.815762 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:29.815775 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.842206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:29.842243 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
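	The block above is one pass of minikube's diagnostic loop: pgrep finds no kube-apiserver process, each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) comes back empty from crictl, and the fallback log gathering (kubelet, dmesg, describe nodes, containerd, container status) follows. The same probes can be reproduced by hand against the node; a minimal sketch, assuming SSH access via the profile under test (<profile> is a placeholder, not taken from this log):
	
	    # does an apiserver process exist at all?
	    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # list all containers, running or exited, named kube-apiserver
	    minikube ssh -p <profile> -- sudo crictl ps -a --name=kube-apiserver
	    # the kubelet and containerd journals usually explain why nothing started
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
	    minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400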
	I1217 02:06:32.370825 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:32.383399 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:32.383490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:32.416122 1498704 cri.go:89] found id: ""
	I1217 02:06:32.416148 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.416157 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:32.416164 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:32.416235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:32.450068 1498704 cri.go:89] found id: ""
	I1217 02:06:32.450092 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.450101 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:32.450107 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:32.450176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:32.475101 1498704 cri.go:89] found id: ""
	I1217 02:06:32.475126 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.475135 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:32.475142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:32.475218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:32.500347 1498704 cri.go:89] found id: ""
	I1217 02:06:32.500372 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.500380 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:32.500387 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:32.500447 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:32.525315 1498704 cri.go:89] found id: ""
	I1217 02:06:32.525346 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.525355 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:32.525361 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:32.525440 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:32.550267 1498704 cri.go:89] found id: ""
	I1217 02:06:32.550341 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.550358 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:32.550365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:32.550424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:32.575413 1498704 cri.go:89] found id: ""
	I1217 02:06:32.575438 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.575447 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:32.575453 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:32.575559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:32.603477 1498704 cri.go:89] found id: ""
	I1217 02:06:32.603503 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.603513 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:32.603523 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:32.603568 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:32.669699 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:32.669735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:32.686097 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:32.686126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:32.755583 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:32.755604 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:32.755616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:32.782146 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:32.782195 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:32.135482 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:34.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
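	The 1494358 lines interleaved here come from a second test running in parallel (the TestStartStop no-preload group), polling the Ready condition of node no-preload-178365 against 192.168.76.2:8443 and getting the same connection-refused answer. The poll it keeps retrying is equivalent to the following (a sketch, assuming a kubeconfig pointed at that profile):
	
	    kubectl get node no-preload-178365 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'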
	I1217 02:06:33.698737 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:33.767478 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:33.767516 1498704 retry.go:31] will retry after 19.401613005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
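	Both addon applies fail for the same underlying reason: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with localhost:8443 refusing connections that download itself fails. The --validate=false escape hatch suggested in the stderr would only skip the schema fetch; the apply still needs a reachable apiserver, so in this state it fails either way. A sketch of the distinction (hypothetical invocation, same binary and manifest paths as above):
	
	    # skips the openapi download, but the subsequent request to :8443 is still refused
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml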
	I1217 02:06:35.214860 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:35.276710 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.276741 1498704 retry.go:31] will retry after 25.686831054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.310030 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:35.320395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:35.320472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:35.344503 1498704 cri.go:89] found id: ""
	I1217 02:06:35.344525 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.344533 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:35.344539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:35.344597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:35.375750 1498704 cri.go:89] found id: ""
	I1217 02:06:35.375773 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.375782 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:35.375788 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:35.375857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:35.403776 1498704 cri.go:89] found id: ""
	I1217 02:06:35.403803 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.403813 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:35.403819 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:35.403878 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:35.437584 1498704 cri.go:89] found id: ""
	I1217 02:06:35.437608 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.437616 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:35.437623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:35.437723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:35.467173 1498704 cri.go:89] found id: ""
	I1217 02:06:35.467207 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.467216 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:35.467223 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:35.467289 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:35.491257 1498704 cri.go:89] found id: ""
	I1217 02:06:35.491284 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.491294 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:35.491301 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:35.491380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:35.515935 1498704 cri.go:89] found id: ""
	I1217 02:06:35.515961 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.515971 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:35.515978 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:35.516077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:35.542706 1498704 cri.go:89] found id: ""
	I1217 02:06:35.542730 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.542739 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:35.542748 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:35.542759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:35.601383 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:35.601428 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:35.616228 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:35.616269 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:35.693548 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:35.693569 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:35.693584 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:35.719247 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:35.719286 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:36.635304 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:39.135165 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:41.135205 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:38.250028 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:38.261967 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:38.262037 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:38.286400 1498704 cri.go:89] found id: ""
	I1217 02:06:38.286423 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.286431 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:38.286437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:38.286499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:38.310618 1498704 cri.go:89] found id: ""
	I1217 02:06:38.310639 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.310647 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:38.310654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:38.310713 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:38.335110 1498704 cri.go:89] found id: ""
	I1217 02:06:38.335136 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.335144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:38.335151 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:38.335214 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:38.364179 1498704 cri.go:89] found id: ""
	I1217 02:06:38.364202 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.364211 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:38.364218 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:38.364278 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:38.402338 1498704 cri.go:89] found id: ""
	I1217 02:06:38.402366 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.402374 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:38.402384 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:38.402443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:38.433053 1498704 cri.go:89] found id: ""
	I1217 02:06:38.433081 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.433090 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:38.433096 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:38.433155 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:38.461635 1498704 cri.go:89] found id: ""
	I1217 02:06:38.461688 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.461698 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:38.461704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:38.461767 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:38.486774 1498704 cri.go:89] found id: ""
	I1217 02:06:38.486798 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.486807 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:38.486816 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:38.486827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:38.543417 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:38.543453 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:38.558472 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:38.558499 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:38.627234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:38.627308 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:38.627336 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:38.656399 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:38.656481 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:41.188669 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:41.199463 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:41.199550 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:41.223737 1498704 cri.go:89] found id: ""
	I1217 02:06:41.223762 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.223771 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:41.223778 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:41.223842 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:41.248972 1498704 cri.go:89] found id: ""
	I1217 02:06:41.248998 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.249014 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:41.249022 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:41.249084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:41.274840 1498704 cri.go:89] found id: ""
	I1217 02:06:41.274873 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.274886 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:41.274892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:41.274965 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:41.302162 1498704 cri.go:89] found id: ""
	I1217 02:06:41.302188 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.302197 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:41.302204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:41.302274 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:41.331745 1498704 cri.go:89] found id: ""
	I1217 02:06:41.331771 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.331780 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:41.331786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:41.331872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:41.366507 1498704 cri.go:89] found id: ""
	I1217 02:06:41.366538 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.366559 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:41.366567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:41.366642 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:41.402343 1498704 cri.go:89] found id: ""
	I1217 02:06:41.402390 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.402400 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:41.402409 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:41.402482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:41.442142 1498704 cri.go:89] found id: ""
	I1217 02:06:41.442169 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.442177 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:41.442187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:41.442198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:41.498349 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:41.498432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:41.514261 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:41.514287 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:41.577450 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:41.577470 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:41.577483 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:41.602731 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:41.602766 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:43.635083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:45.635371 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:44.138863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:44.149308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:44.149424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:44.175006 1498704 cri.go:89] found id: ""
	I1217 02:06:44.175031 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.175040 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:44.175047 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:44.175103 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:44.199571 1498704 cri.go:89] found id: ""
	I1217 02:06:44.199596 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.199605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:44.199612 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:44.199669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:44.227289 1498704 cri.go:89] found id: ""
	I1217 02:06:44.227313 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.227323 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:44.227329 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:44.227418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:44.255509 1498704 cri.go:89] found id: ""
	I1217 02:06:44.255549 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.255558 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:44.255564 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:44.255622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:44.282827 1498704 cri.go:89] found id: ""
	I1217 02:06:44.282850 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.282858 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:44.282864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:44.282971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:44.310331 1498704 cri.go:89] found id: ""
	I1217 02:06:44.310354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.310363 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:44.310370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:44.310427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:44.334927 1498704 cri.go:89] found id: ""
	I1217 02:06:44.334952 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.334961 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:44.334968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:44.335068 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:44.359119 1498704 cri.go:89] found id: ""
	I1217 02:06:44.359144 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.359153 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:44.359162 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:44.359192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:44.436966 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:44.436987 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:44.437000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:44.462649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:44.462686 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.492091 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:44.492120 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:44.548670 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:44.548707 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.063448 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:47.073962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:47.074076 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:47.100530 1498704 cri.go:89] found id: ""
	I1217 02:06:47.100565 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.100574 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:47.100580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:47.100656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:47.126541 1498704 cri.go:89] found id: ""
	I1217 02:06:47.126573 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.126582 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:47.126589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:47.126657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:47.155783 1498704 cri.go:89] found id: ""
	I1217 02:06:47.155807 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.155816 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:47.155822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:47.155887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:47.183519 1498704 cri.go:89] found id: ""
	I1217 02:06:47.183547 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.183556 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:47.183562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:47.183640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:47.207004 1498704 cri.go:89] found id: ""
	I1217 02:06:47.207029 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.207038 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:47.207044 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:47.207107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:47.236132 1498704 cri.go:89] found id: ""
	I1217 02:06:47.236157 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.236166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:47.236173 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:47.236237 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:47.262428 1498704 cri.go:89] found id: ""
	I1217 02:06:47.262452 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.262460 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:47.262470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:47.262526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:47.291039 1498704 cri.go:89] found id: ""
	I1217 02:06:47.291113 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.291127 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:47.291137 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:47.291154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:47.348423 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:47.348457 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.362973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:47.363001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:47.446529 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:47.446602 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:47.446619 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:47.471848 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:47.471885 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
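The cycle above is the pattern this test repeats for the next several minutes: probe each expected control-plane component by name through crictl, find nothing, then fall back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. Below is a minimal Go sketch of the probe half. It shells out the same "sudo crictl ps -a --quiet --name=<component>" command the log shows, but the structure and names are illustrative rather than minikube's actual cri.go/logs.go code, and it assumes sudo and crictl exist on the host.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same component names the log cycles through.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Mirrors: sudo crictl ps -a --quiet --name=<name>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", name, err)
    			continue
    		}
    		// --quiet prints one container ID per line; empty output is the
    		// "No container was found matching ..." case logged above.
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%q: found %v\n", name, ids)
    	}
    }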
	W1217 02:06:48.135178 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:50.635159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:50.002430 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:50.016670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:50.016759 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:50.048092 1498704 cri.go:89] found id: ""
	I1217 02:06:50.048116 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.048126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:50.048132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:50.048193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:50.077981 1498704 cri.go:89] found id: ""
	I1217 02:06:50.078006 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.078016 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:50.078023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:50.078084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:50.104799 1498704 cri.go:89] found id: ""
	I1217 02:06:50.104824 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.104833 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:50.104839 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:50.104899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:50.134987 1498704 cri.go:89] found id: ""
	I1217 02:06:50.135010 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.135019 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:50.135025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:50.135088 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:50.163663 1498704 cri.go:89] found id: ""
	I1217 02:06:50.163689 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.163698 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:50.163704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:50.163771 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:50.189331 1498704 cri.go:89] found id: ""
	I1217 02:06:50.189354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.189362 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:50.189369 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:50.189435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:50.214491 1498704 cri.go:89] found id: ""
	I1217 02:06:50.214516 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.214525 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:50.214531 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:50.214590 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:50.238415 1498704 cri.go:89] found id: ""
	I1217 02:06:50.238442 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.238451 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:50.238460 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:50.238472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.269776 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:50.269804 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:50.327018 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:50.327055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:50.341848 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:50.341876 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:50.424429 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:50.424452 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:50.424466 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:52.635229 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:54.635273 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
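Interleaved with that gathering, a second process (pid 1494358, the no-preload test) keeps polling its node's Ready condition and logging "will retry" on every connection refused. The loop below is a rough sketch of that poll-until-deadline shape under two stated assumptions: it uses a bare HTTP GET where the real check goes through an authenticated Kubernetes client, and the two-second interval and two-minute deadline are invented for illustration; this is not minikube's node_ready.go.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // nodeReady is a stand-in for the real check; a bare GET against the
    // apiserver would fail TLS verification even when it is up, which here
    // simply counts as "not ready yet".
    func nodeReady(url string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if err := nodeReady(url); err == nil {
    			fmt.Println("node is Ready")
    			return
    		} else {
    			fmt.Printf("error getting node (will retry): %v\n", err)
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for node")
    }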
	I1217 02:06:52.954006 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:52.964727 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:52.964802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:52.989789 1498704 cri.go:89] found id: ""
	I1217 02:06:52.989810 1498704 logs.go:282] 0 containers: []
	W1217 02:06:52.989819 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:52.989826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:52.989887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:53.015439 1498704 cri.go:89] found id: ""
	I1217 02:06:53.015467 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.015476 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:53.015482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:53.015592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:53.040841 1498704 cri.go:89] found id: ""
	I1217 02:06:53.040865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.040875 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:53.040882 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:53.040942 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:53.066349 1498704 cri.go:89] found id: ""
	I1217 02:06:53.066374 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.066383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:53.066389 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:53.066451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:53.091390 1498704 cri.go:89] found id: ""
	I1217 02:06:53.091415 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.091424 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:53.091430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:53.091490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:53.117556 1498704 cri.go:89] found id: ""
	I1217 02:06:53.117581 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.117590 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:53.117597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:53.117683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:53.142385 1498704 cri.go:89] found id: ""
	I1217 02:06:53.142411 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.142421 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:53.142428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:53.142487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:53.167326 1498704 cri.go:89] found id: ""
	I1217 02:06:53.167351 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.167360 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:53.167370 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:53.167410 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:53.169580 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:06:53.227048 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:53.227133 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:06:53.263335 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:53.263474 1498704 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
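Every dashboard manifest fails client-side validation for the same reason: kubectl cannot fetch the OpenAPI schema from an apiserver that is not listening (the stderr itself notes that --validate=false would skip validation). The "apply failed, will retry" wording shows the apply is wrapped in a retry loop. Below is a rough Go sketch of such a wrapper; the retry count, backoff, and the choice to pass KUBECONFIG via the environment rather than through sudo are illustrative, not minikube's addons.go.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // apply runs kubectl apply --force over a set of manifests, the same
    // command shape the log shows for the dashboard addon.
    func apply(kubeconfig string, manifests []string) error {
    	args := []string{"apply", "--force"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml", // remaining files omitted
    	}
    	for attempt := 1; attempt <= 5; attempt++ {
    		err := apply("/var/lib/minikube/kubeconfig", manifests)
    		if err == nil {
    			return
    		}
    		fmt.Printf("apply failed (attempt %d), will retry: %v\n", attempt, err)
    		time.Sleep(5 * time.Second)
    	}
    }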
	I1217 02:06:53.263485 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:53.263548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:53.331925 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:53.331956 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:53.331970 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:53.358423 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:53.358461 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:55.889770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:55.902670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:55.902755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:55.931695 1498704 cri.go:89] found id: ""
	I1217 02:06:55.931717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.931726 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:55.931732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:55.931792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:55.957876 1498704 cri.go:89] found id: ""
	I1217 02:06:55.957898 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.957906 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:55.957913 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:55.957971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:55.985470 1498704 cri.go:89] found id: ""
	I1217 02:06:55.985494 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.985503 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:55.985510 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:55.985569 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:56.012853 1498704 cri.go:89] found id: ""
	I1217 02:06:56.012876 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.012885 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:56.012892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:56.012953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:56.038869 1498704 cri.go:89] found id: ""
	I1217 02:06:56.038896 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.038906 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:56.038912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:56.038974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:56.063896 1498704 cri.go:89] found id: ""
	I1217 02:06:56.063922 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.063931 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:56.063938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:56.063998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:56.094167 1498704 cri.go:89] found id: ""
	I1217 02:06:56.094194 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.094202 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:56.094209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:56.094317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:56.119180 1498704 cri.go:89] found id: ""
	I1217 02:06:56.119203 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.119211 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:56.119220 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:56.119233 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:56.145717 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:56.145755 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:56.174733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:56.174764 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:56.231996 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:56.232031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:56.246270 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:56.246298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:56.310523 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:58.810773 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:58.820984 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:58.821052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:58.844690 1498704 cri.go:89] found id: ""
	I1217 02:06:58.844713 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.844723 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:58.844729 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:58.844789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:58.869040 1498704 cri.go:89] found id: ""
	I1217 02:06:58.869065 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.869074 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:58.869081 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:58.869141 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:58.897937 1498704 cri.go:89] found id: ""
	I1217 02:06:58.897965 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.897974 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:58.897981 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:58.898046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:58.936181 1498704 cri.go:89] found id: ""
	I1217 02:06:58.936206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.936216 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:58.936222 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:58.936284 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:58.961870 1498704 cri.go:89] found id: ""
	I1217 02:06:58.961894 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.961902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:58.961908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:58.961973 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:58.987453 1498704 cri.go:89] found id: ""
	I1217 02:06:58.987476 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.987485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:58.987492 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:58.987589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:59.014256 1498704 cri.go:89] found id: ""
	I1217 02:06:59.014281 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.014290 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:59.014296 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:59.014356 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:59.043181 1498704 cri.go:89] found id: ""
	I1217 02:06:59.043206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.043214 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:59.043224 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:59.043265 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:59.069988 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:59.070014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:59.126583 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:59.126616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:59.143769 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:59.143858 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:59.206336 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:06:59.206357 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:59.206368 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:59.467894 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:59.526704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.526801 1498704 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:00.964501 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:01.024877 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:01.024990 1498704 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:01.030055 1498704 out.go:179] * Enabled addons: 
	W1217 02:06:57.134604 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:59.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:01.032983 1498704 addons.go:530] duration metric: took 1m40.577449503s for enable addons: enabled=[]
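The addon phase ends with an empty "Enabled addons:" list and a duration metric of 1m40.577449503s for enabled=[]: the retries ran their full budget and enabled nothing. The metric itself is plain elapsed-time measurement; a tiny sketch of the pattern, with a sleep standing in for the real work:

    package main

    import (
    	"fmt"
    	"time"
    )

    func enableAddons() []string {
    	time.Sleep(100 * time.Millisecond) // stand-in for the retry loops above
    	return nil                         // nothing succeeded, matching enabled=[]
    }

    func main() {
    	start := time.Now()
    	enabled := enableAddons()
    	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
    		time.Since(start), enabled)
    }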
	I1217 02:07:01.732628 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:01.743041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:01.743116 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:01.767462 1498704 cri.go:89] found id: ""
	I1217 02:07:01.767488 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.767497 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:01.767503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:01.767602 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:01.793082 1498704 cri.go:89] found id: ""
	I1217 02:07:01.793104 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.793112 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:01.793119 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:01.793179 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:01.819716 1498704 cri.go:89] found id: ""
	I1217 02:07:01.819740 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.819749 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:01.819755 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:01.819815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:01.847485 1498704 cri.go:89] found id: ""
	I1217 02:07:01.847556 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.847572 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:01.847580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:01.847641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:01.875985 1498704 cri.go:89] found id: ""
	I1217 02:07:01.876062 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.876084 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:01.876103 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:01.876193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:01.910714 1498704 cri.go:89] found id: ""
	I1217 02:07:01.910739 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.910748 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:01.910754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:01.910813 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:01.937846 1498704 cri.go:89] found id: ""
	I1217 02:07:01.937871 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.937880 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:01.937886 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:01.937945 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:01.964067 1498704 cri.go:89] found id: ""
	I1217 02:07:01.964091 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.964100 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:01.964114 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:01.964126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:02.028700 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:02.028724 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:02.028739 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:02.054141 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:02.054180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:02.082544 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:02.082570 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:02.139516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:02.139555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:07:01.635378 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:04.134753 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:06.135163 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:04.654404 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:04.665750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:04.665823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:04.692548 1498704 cri.go:89] found id: ""
	I1217 02:07:04.692573 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.692582 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:04.692589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:04.692649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:04.716945 1498704 cri.go:89] found id: ""
	I1217 02:07:04.716971 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.716980 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:04.716986 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:04.717050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:04.741853 1498704 cri.go:89] found id: ""
	I1217 02:07:04.741919 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.741943 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:04.741956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:04.742029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:04.766368 1498704 cri.go:89] found id: ""
	I1217 02:07:04.766432 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.766456 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:04.766471 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:04.766543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:04.791787 1498704 cri.go:89] found id: ""
	I1217 02:07:04.791811 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.791819 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:04.791826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:04.791886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:04.817229 1498704 cri.go:89] found id: ""
	I1217 02:07:04.817255 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.817264 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:04.817271 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:04.817343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:04.841915 1498704 cri.go:89] found id: ""
	I1217 02:07:04.841938 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.841947 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:04.841953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:04.842013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:04.866862 1498704 cri.go:89] found id: ""
	I1217 02:07:04.866889 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.866898 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
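The block above is minikube's control-plane probe: each cycle first looks for a running apiserver process (pgrep -xnf, matching the full command line and taking the newest PID), then queries containerd through crictl once per expected component; an empty ID list is what produces each No-container-found warning. A minimal shell sketch of the same probe, with the component list taken from the log and a configured crictl assumed:

    # Probe each control-plane component the way the log shows; an empty
    # result from crictl means the component's container does not exist.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done

Every probe here comes back empty, so the run falls through to the log-gathering steps below.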
	I1217 02:07:04.866908 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:04.866920 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:04.930507 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:04.930554 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.948025 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:04.948060 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:05.019651 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
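The describe-nodes failure has the same root cause as the empty probes: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running nothing listens on that port, so every API-group request is refused. A quick manual check from inside the node (a sketch, not something the test harness runs) would be to hit the apiserver health endpoint directly:

    # Connection refused here confirms nothing is listening on 8443 at all,
    # ruling out a kubeconfig or credential problem.
    curl -ks https://localhost:8443/readyz || echo "apiserver not listening on 8443"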
	I1217 02:07:05.019675 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:05.019688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:05.046001 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:05.046036 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
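Note the fallback built into the container-status command above: it resolves the crictl path with "which crictl || echo crictl", and if the crictl invocation fails for any reason it retries with docker. Expanded into plain shell, the logic is roughly:

    # Same fallback, spelled out: prefer crictl (by resolved path or bare
    # name), and fall back to docker if the crictl call exits non-zero.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a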
	I1217 02:07:07.578495 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:07.591153 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:07.591225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:07.621427 1498704 cri.go:89] found id: ""
	I1217 02:07:07.621450 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.621459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:07.621466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:07.621526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:07.661892 1498704 cri.go:89] found id: ""
	I1217 02:07:07.661915 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.661923 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:07.661929 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:07.661995 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:07.695665 1498704 cri.go:89] found id: ""
	I1217 02:07:07.695693 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.695703 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:07.695709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:07.695775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:07.721278 1498704 cri.go:89] found id: ""
	I1217 02:07:07.721308 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.721316 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:07.721323 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:07.721381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:07.745368 1498704 cri.go:89] found id: ""
	I1217 02:07:07.745396 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.745404 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:07.745411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:07.745469 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:07.773994 1498704 cri.go:89] found id: ""
	I1217 02:07:07.774017 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.774025 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:07.774032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:07.774094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:07.799025 1498704 cri.go:89] found id: ""
	I1217 02:07:07.799049 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.799058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:07.799070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:07.799128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:07.824235 1498704 cri.go:89] found id: ""
	I1217 02:07:07.824261 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.824270 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:07.824278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:07.824290 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:07.839101 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:07.839129 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:08.135245 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:10.635146 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
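Interleaved with this gather loop, a second test process (PID 1494358, working on profile no-preload-178365) is polling that node's Ready condition against 192.168.76.2:8443 every few seconds and hitting the same connection refused. The equivalent one-shot check, assuming a kubeconfig for that profile, would be:

    # Prints "True" once the node reports Ready; fails with connection
    # refused while the apiserver is down, exactly as the warnings show.
    kubectl get node no-preload-178365 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'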
	W1217 02:07:07.923334 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:07.923360 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:07.923372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:07.949715 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:07.949754 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.977665 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:07.977690 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.537062 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:10.547797 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:10.547872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:10.572434 1498704 cri.go:89] found id: ""
	I1217 02:07:10.572462 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.572472 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:10.572479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:10.572560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:10.597486 1498704 cri.go:89] found id: ""
	I1217 02:07:10.597510 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.597519 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:10.597525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:10.597591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:10.627205 1498704 cri.go:89] found id: ""
	I1217 02:07:10.627227 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.627236 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:10.627241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:10.627316 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:10.661788 1498704 cri.go:89] found id: ""
	I1217 02:07:10.661815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.661825 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:10.661832 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:10.661892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:10.694378 1498704 cri.go:89] found id: ""
	I1217 02:07:10.694403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.694411 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:10.694417 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:10.694481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:10.719732 1498704 cri.go:89] found id: ""
	I1217 02:07:10.719759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.719768 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:10.719775 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:10.719834 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:10.746071 1498704 cri.go:89] found id: ""
	I1217 02:07:10.746141 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.746169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:10.746181 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:10.746257 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:10.771251 1498704 cri.go:89] found id: ""
	I1217 02:07:10.771324 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.771339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:10.771349 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:10.771363 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:10.797277 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:10.797316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:10.824227 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:10.824255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.883648 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:10.883685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:10.899500 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:10.899545 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:10.971848 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:07:13.135257 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:15.635347 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:13.472155 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:13.482654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:13.482730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:13.511840 1498704 cri.go:89] found id: ""
	I1217 02:07:13.511865 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.511874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:13.511880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:13.511938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:13.539314 1498704 cri.go:89] found id: ""
	I1217 02:07:13.539340 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.539349 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:13.539355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:13.539418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:13.564523 1498704 cri.go:89] found id: ""
	I1217 02:07:13.564595 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.564616 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:13.564635 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:13.564722 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:13.588672 1498704 cri.go:89] found id: ""
	I1217 02:07:13.588696 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.588705 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:13.588711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:13.588769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:13.613292 1498704 cri.go:89] found id: ""
	I1217 02:07:13.613370 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.613394 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:13.613413 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:13.613497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:13.640379 1498704 cri.go:89] found id: ""
	I1217 02:07:13.640401 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.640467 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:13.640475 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:13.640596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:13.670823 1498704 cri.go:89] found id: ""
	I1217 02:07:13.670897 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.670909 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:13.670915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:13.671033 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:13.697928 1498704 cri.go:89] found id: ""
	I1217 02:07:13.697954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.697963 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:13.697973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:13.697991 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:13.764081 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:13.764103 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:13.764117 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:13.789698 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:13.789735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:13.817458 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:13.817528 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:13.873570 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:13.873604 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.390490 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:16.400824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:16.400892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:16.433284 1498704 cri.go:89] found id: ""
	I1217 02:07:16.433306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.433315 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:16.433321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:16.433382 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:16.459029 1498704 cri.go:89] found id: ""
	I1217 02:07:16.459051 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.459059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:16.459065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:16.459123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:16.482532 1498704 cri.go:89] found id: ""
	I1217 02:07:16.482559 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.482568 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:16.482574 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:16.482635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:16.508099 1498704 cri.go:89] found id: ""
	I1217 02:07:16.508126 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.508135 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:16.508141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:16.508198 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:16.537293 1498704 cri.go:89] found id: ""
	I1217 02:07:16.537327 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.537336 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:16.537343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:16.537422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:16.561736 1498704 cri.go:89] found id: ""
	I1217 02:07:16.561761 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.561769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:16.561776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:16.561841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:16.588020 1498704 cri.go:89] found id: ""
	I1217 02:07:16.588054 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.588063 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:16.588069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:16.588136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:16.614951 1498704 cri.go:89] found id: ""
	I1217 02:07:16.614983 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.614993 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:16.615018 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:16.615035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:16.674706 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:16.674738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.693871 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:16.694008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:16.761779 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:16.761800 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:16.761813 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:16.788228 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:16.788270 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:18.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:20.135199 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:19.320399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:19.330773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:19.330845 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:19.354921 1498704 cri.go:89] found id: ""
	I1217 02:07:19.354990 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.355015 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:19.355028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:19.355100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:19.383572 1498704 cri.go:89] found id: ""
	I1217 02:07:19.383648 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.383662 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:19.383670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:19.383735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:19.412179 1498704 cri.go:89] found id: ""
	I1217 02:07:19.412204 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.412213 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:19.412229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:19.412290 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:19.437924 1498704 cri.go:89] found id: ""
	I1217 02:07:19.437950 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.437959 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:19.437966 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:19.438057 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:19.462416 1498704 cri.go:89] found id: ""
	I1217 02:07:19.462483 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.462507 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:19.462528 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:19.462618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:19.486955 1498704 cri.go:89] found id: ""
	I1217 02:07:19.487022 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.487047 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:19.487061 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:19.487133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:19.517143 1498704 cri.go:89] found id: ""
	I1217 02:07:19.517170 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.517178 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:19.517185 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:19.517245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:19.541419 1498704 cri.go:89] found id: ""
	I1217 02:07:19.541443 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.541452 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:19.541462 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:19.541474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:19.600586 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:19.600621 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:19.615645 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:19.615673 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:19.700496 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:19.700518 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:19.700531 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:19.725860 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:19.725896 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:22.254753 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:22.266831 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:22.266902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:22.291227 1498704 cri.go:89] found id: ""
	I1217 02:07:22.291306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.291329 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:22.291344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:22.291421 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:22.317812 1498704 cri.go:89] found id: ""
	I1217 02:07:22.317835 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.317844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:22.317850 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:22.317929 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:22.341950 1498704 cri.go:89] found id: ""
	I1217 02:07:22.341973 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.341982 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:22.341991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:22.342074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:22.368217 1498704 cri.go:89] found id: ""
	I1217 02:07:22.368291 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.368330 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:22.368350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:22.368435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:22.396888 1498704 cri.go:89] found id: ""
	I1217 02:07:22.396911 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.396920 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:22.396926 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:22.396987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:22.420964 1498704 cri.go:89] found id: ""
	I1217 02:07:22.421040 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.421064 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:22.421083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:22.421163 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:22.446890 1498704 cri.go:89] found id: ""
	I1217 02:07:22.446954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.446980 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:22.447002 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:22.447067 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:22.475922 1498704 cri.go:89] found id: ""
	I1217 02:07:22.475949 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.475959 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:22.475968 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:22.475980 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:22.532457 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:22.532490 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:22.546823 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:22.546900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:22.612059 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:22.612089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:22.612102 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:22.642268 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:22.642325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:22.635112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:25.134718 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:25.182933 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:25.194033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:25.194115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:25.218403 1498704 cri.go:89] found id: ""
	I1217 02:07:25.218426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.218434 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:25.218441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:25.218500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:25.247233 1498704 cri.go:89] found id: ""
	I1217 02:07:25.247257 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.247267 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:25.247272 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:25.247337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:25.271255 1498704 cri.go:89] found id: ""
	I1217 02:07:25.271278 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.271286 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:25.271292 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:25.271354 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:25.295129 1498704 cri.go:89] found id: ""
	I1217 02:07:25.295152 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.295161 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:25.295167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:25.295232 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:25.323735 1498704 cri.go:89] found id: ""
	I1217 02:07:25.323802 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.323818 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:25.323826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:25.323895 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:25.348083 1498704 cri.go:89] found id: ""
	I1217 02:07:25.348107 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.348116 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:25.348123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:25.348187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:25.375945 1498704 cri.go:89] found id: ""
	I1217 02:07:25.375967 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.375976 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:25.375982 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:25.376046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:25.404167 1498704 cri.go:89] found id: ""
	I1217 02:07:25.404190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.404199 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:25.404207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:25.404219 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.432830 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:25.432905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:25.491437 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:25.491472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:25.506773 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:25.506811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:25.571857 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:25.563411    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.564290    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566145    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566486    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.567944    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:25.571879 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:25.571891 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
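
Each retry cycle above runs the same per-component container query. For manual triage, the loop below reproduces it using only commands that already appear in this log; the one assumption is shell access to the node (for example via "minikube ssh", which the log itself does not show):

    # query every expected control-plane/addon container, any state; empty output = not found
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${c} =="
      sudo crictl ps -a --quiet --name="${c}"
    done
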
	W1217 02:07:27.634506 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:29.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:28.097148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:28.109420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:28.109492 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:28.147274 1498704 cri.go:89] found id: ""
	I1217 02:07:28.147301 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.147310 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:28.147317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:28.147375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:28.182487 1498704 cri.go:89] found id: ""
	I1217 02:07:28.182520 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.182529 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:28.182535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:28.182605 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:28.210414 1498704 cri.go:89] found id: ""
	I1217 02:07:28.210492 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.210506 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:28.210513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:28.210596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:28.236032 1498704 cri.go:89] found id: ""
	I1217 02:07:28.236067 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.236076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:28.236100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:28.236187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:28.261848 1498704 cri.go:89] found id: ""
	I1217 02:07:28.261925 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.261949 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:28.261961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:28.262023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:28.287575 1498704 cri.go:89] found id: ""
	I1217 02:07:28.287642 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.287667 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:28.287681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:28.287753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:28.311909 1498704 cri.go:89] found id: ""
	I1217 02:07:28.311942 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.311950 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:28.311974 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:28.312055 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:28.338978 1498704 cri.go:89] found id: ""
	I1217 02:07:28.338999 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.339013 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:28.339041 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:28.339059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:28.395245 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:28.395283 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:28.410155 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:28.410183 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:28.473762 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:28.465176    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.465695    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467313    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467841    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.469624    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:28.473783 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:28.473807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:28.499695 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:28.499728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
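
The recurring "failed describe nodes" errors are the expected companion symptom: with no kube-apiserver container running, kubectl's requests to localhost:8443 are refused before API discovery can begin. The first two commands below are taken verbatim from the log; the final "ss" probe is an added assumption (it presupposes the ss utility is installed on the node) that confirms nothing is listening on port 8443:

    sudo crictl ps -a --quiet --name=kube-apiserver    # no output: apiserver container absent
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig      # fails: dial tcp [::1]:8443 refused
    sudo ss -ltn 'sport = :8443'                       # assumption: ss available; empty table = no listener
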
	I1217 02:07:31.034443 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:31.045062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:31.045138 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:31.071798 1498704 cri.go:89] found id: ""
	I1217 02:07:31.071825 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.071835 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:31.071842 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:31.071912 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:31.102760 1498704 cri.go:89] found id: ""
	I1217 02:07:31.102787 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.102795 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:31.102802 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:31.102866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:31.141278 1498704 cri.go:89] found id: ""
	I1217 02:07:31.141303 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.141313 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:31.141320 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:31.141385 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:31.171560 1498704 cri.go:89] found id: ""
	I1217 02:07:31.171590 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.171599 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:31.171606 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:31.171671 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:31.198647 1498704 cri.go:89] found id: ""
	I1217 02:07:31.198713 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.198736 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:31.198749 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:31.198822 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:31.223451 1498704 cri.go:89] found id: ""
	I1217 02:07:31.223534 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.223560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:31.223580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:31.223660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:31.253387 1498704 cri.go:89] found id: ""
	I1217 02:07:31.253413 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.253422 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:31.253428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:31.253487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:31.278792 1498704 cri.go:89] found id: ""
	I1217 02:07:31.278815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.278823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:31.278832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:31.278843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:31.303758 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:31.303790 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.332180 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:31.332251 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:31.388186 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:31.388222 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:31.402632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:31.402661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:31.464007 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:31.455376    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456162    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456959    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458412    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458952    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:07:32.134594 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:34.135393 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:33.964236 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:33.974724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:33.974801 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:33.997812 1498704 cri.go:89] found id: ""
	I1217 02:07:33.997833 1498704 logs.go:282] 0 containers: []
	W1217 02:07:33.997841 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:33.997847 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:33.997918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:34.028229 1498704 cri.go:89] found id: ""
	I1217 02:07:34.028256 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.028265 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:34.028273 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:34.028333 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:34.053400 1498704 cri.go:89] found id: ""
	I1217 02:07:34.053426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.053437 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:34.053444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:34.053504 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:34.079351 1498704 cri.go:89] found id: ""
	I1217 02:07:34.079419 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.079433 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:34.079441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:34.079499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:34.106192 1498704 cri.go:89] found id: ""
	I1217 02:07:34.106228 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.106237 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:34.106244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:34.106315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:34.147697 1498704 cri.go:89] found id: ""
	I1217 02:07:34.147759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.147785 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:34.147810 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:34.147890 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:34.176177 1498704 cri.go:89] found id: ""
	I1217 02:07:34.176244 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.176268 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:34.176288 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:34.176365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:34.205945 1498704 cri.go:89] found id: ""
	I1217 02:07:34.206007 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.206035 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:34.206056 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:34.206081 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:34.262276 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:34.262309 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:34.276944 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:34.276971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:34.338908 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:34.331218    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.331638    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333081    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333377    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.334783    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:34.338934 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:34.338947 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:34.363617 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:34.363647 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:36.891296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:36.902860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:36.902927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:36.930707 1498704 cri.go:89] found id: ""
	I1217 02:07:36.930733 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.930747 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:36.930754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:36.930811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:36.955573 1498704 cri.go:89] found id: ""
	I1217 02:07:36.955597 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.955605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:36.955611 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:36.955668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:36.980409 1498704 cri.go:89] found id: ""
	I1217 02:07:36.980434 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.980444 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:36.980450 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:36.980508 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:37.009442 1498704 cri.go:89] found id: ""
	I1217 02:07:37.009467 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.009477 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:37.009484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:37.009551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:37.037149 1498704 cri.go:89] found id: ""
	I1217 02:07:37.037171 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.037180 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:37.037186 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:37.037250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:37.061767 1498704 cri.go:89] found id: ""
	I1217 02:07:37.061792 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.061801 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:37.061818 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:37.061889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:37.085968 1498704 cri.go:89] found id: ""
	I1217 02:07:37.085993 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.086003 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:37.086009 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:37.086074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:37.115273 1498704 cri.go:89] found id: ""
	I1217 02:07:37.115295 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.115303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:37.115312 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:37.115323 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:37.173190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:37.173223 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:37.190802 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:37.190834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:37.258464 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:37.250353    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.250978    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.252515    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.253019    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.254562    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:37.258486 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:37.258498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:37.283631 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:37.283665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:36.635067 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:38.635141 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:40.635215 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:39.816914 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:39.827386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:39.827463 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:39.852104 1498704 cri.go:89] found id: ""
	I1217 02:07:39.852129 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.852139 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:39.852145 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:39.852204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:39.892785 1498704 cri.go:89] found id: ""
	I1217 02:07:39.892806 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.892815 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:39.892822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:39.892887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:39.923500 1498704 cri.go:89] found id: ""
	I1217 02:07:39.923530 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.923538 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:39.923544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:39.923603 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:39.949968 1498704 cri.go:89] found id: ""
	I1217 02:07:39.949995 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.950004 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:39.950010 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:39.950071 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:39.974479 1498704 cri.go:89] found id: ""
	I1217 02:07:39.974500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.974508 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:39.974515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:39.974572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:40.015259 1498704 cri.go:89] found id: ""
	I1217 02:07:40.015286 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.015296 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:40.015303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:40.015375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:40.045029 1498704 cri.go:89] found id: ""
	I1217 02:07:40.045055 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.045064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:40.045071 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:40.045135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:40.072784 1498704 cri.go:89] found id: ""
	I1217 02:07:40.072818 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.072833 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:40.072843 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:40.072860 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:40.153737 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:40.142795    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.144161    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.145378    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.146432    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.147502    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:40.153765 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:40.153780 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:40.189498 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:40.189552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:40.222768 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:40.222844 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:40.279190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:40.279224 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
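
For completeness, the four log sources each cycle gathers can be pulled directly with the exact commands the test runs (all four lines below are verbatim from this log):

    sudo journalctl -u kubelet -n 400          # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400       # last 400 containerd journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
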
	I1217 02:07:42.796231 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:42.806670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:42.806738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:42.830230 1498704 cri.go:89] found id: ""
	I1217 02:07:42.830250 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.830258 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:42.830265 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:42.830323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1217 02:07:43.135159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:45.135226 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:42.855478 1498704 cri.go:89] found id: ""
	I1217 02:07:42.855500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.855509 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:42.855515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:42.855580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:42.894494 1498704 cri.go:89] found id: ""
	I1217 02:07:42.894522 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.894530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:42.894536 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:42.894593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:42.921324 1498704 cri.go:89] found id: ""
	I1217 02:07:42.921350 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.921359 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:42.921365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:42.921435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:42.953266 1498704 cri.go:89] found id: ""
	I1217 02:07:42.953290 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.953299 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:42.953305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:42.953366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:42.977816 1498704 cri.go:89] found id: ""
	I1217 02:07:42.977841 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.977850 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:42.977856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:42.977917 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:43.003747 1498704 cri.go:89] found id: ""
	I1217 02:07:43.003839 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.003865 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:43.003880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:43.003963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:43.029772 1498704 cri.go:89] found id: ""
	I1217 02:07:43.029797 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.029806 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:43.029816 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:43.029828 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:43.055443 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:43.055476 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:43.084076 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:43.084104 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:43.145546 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:43.145607 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:43.161920 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:43.161999 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:43.231831 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:43.222961    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.223493    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225230    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225634    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.227364    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:45.733506 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:45.744340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:45.744408 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:45.769934 1498704 cri.go:89] found id: ""
	I1217 02:07:45.769957 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.769965 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:45.769971 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:45.770034 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:45.795238 1498704 cri.go:89] found id: ""
	I1217 02:07:45.795263 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.795272 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:45.795279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:45.795343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:45.821898 1498704 cri.go:89] found id: ""
	I1217 02:07:45.821922 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.821930 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:45.821937 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:45.821999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:45.847109 1498704 cri.go:89] found id: ""
	I1217 02:07:45.847132 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.847140 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:45.847146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:45.847208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:45.880160 1498704 cri.go:89] found id: ""
	I1217 02:07:45.880190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.880199 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:45.880205 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:45.880271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:45.910818 1498704 cri.go:89] found id: ""
	I1217 02:07:45.910850 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.910859 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:45.910866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:45.910927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:45.939378 1498704 cri.go:89] found id: ""
	I1217 02:07:45.939403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.939413 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:45.939419 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:45.939480 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:45.966395 1498704 cri.go:89] found id: ""
	I1217 02:07:45.966421 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.966430 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:45.966440 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:45.966479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:45.981177 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:45.981203 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:46.055154 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:46.045816    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.046563    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.048453    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.049038    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.050565    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:46.055186 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:46.055204 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:46.081781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:46.081822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:46.110247 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:46.110271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:07:47.635175 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:50.134634 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
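	The retry loop above polls the node object directly; the same reachability check can be reproduced from the host with curl, using the URL copied from the log (--insecure because the host does not trust the cluster's serving certificate):

	    curl --insecure --max-time 5 \
	      https://192.168.76.2:8443/api/v1/nodes/no-preload-178365

	While the apiserver is down this fails with the same "connection refused" as above; once it is serving, it returns the Node object (or a 401/403 without credentials).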
	I1217 02:07:48.673749 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:48.684117 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:48.684190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:48.710141 1498704 cri.go:89] found id: ""
	I1217 02:07:48.710163 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.710171 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:48.710177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:48.710242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:48.735609 1498704 cri.go:89] found id: ""
	I1217 02:07:48.735631 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.735639 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:48.735648 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:48.735707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:48.760494 1498704 cri.go:89] found id: ""
	I1217 02:07:48.760517 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.760525 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:48.760532 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:48.760592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:48.786553 1498704 cri.go:89] found id: ""
	I1217 02:07:48.786574 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.786582 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:48.786588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:48.786645 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:48.815529 1498704 cri.go:89] found id: ""
	I1217 02:07:48.815551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.815560 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:48.815566 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:48.815623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:48.839528 1498704 cri.go:89] found id: ""
	I1217 02:07:48.839551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.839560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:48.839567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:48.839649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:48.870240 1498704 cri.go:89] found id: ""
	I1217 02:07:48.870266 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.870275 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:48.870282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:48.870363 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:48.906712 1498704 cri.go:89] found id: ""
	I1217 02:07:48.906736 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.906746 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:48.906756 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:48.906786 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:48.934786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:48.934865 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:48.964758 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:48.964785 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:49.022291 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:49.022326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:49.036990 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:49.037025 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:49.101921 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:49.093270    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.093786    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095214    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095625    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.097015    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.602715 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.614088 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:51.614167 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:51.640614 1498704 cri.go:89] found id: ""
	I1217 02:07:51.640639 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.640648 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:51.640655 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:51.640716 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:51.665595 1498704 cri.go:89] found id: ""
	I1217 02:07:51.665622 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.665631 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:51.665637 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:51.665727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:51.690508 1498704 cri.go:89] found id: ""
	I1217 02:07:51.690532 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.690541 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:51.690547 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:51.690627 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:51.717537 1498704 cri.go:89] found id: ""
	I1217 02:07:51.717561 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.717570 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:51.717577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:51.717638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:51.742073 1498704 cri.go:89] found id: ""
	I1217 02:07:51.742095 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.742104 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:51.742110 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:51.742169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:51.768165 1498704 cri.go:89] found id: ""
	I1217 02:07:51.768188 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.768234 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:51.768255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:51.768322 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:51.793095 1498704 cri.go:89] found id: ""
	I1217 02:07:51.793118 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.793127 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:51.793133 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:51.793195 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:51.817679 1498704 cri.go:89] found id: ""
	I1217 02:07:51.817701 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.817710 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:51.817720 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:51.817730 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:51.874453 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:51.874486 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:51.890393 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:51.890418 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:51.966182 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.966201 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:51.966214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:51.992382 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:51.992417 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:52.135139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:54.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:54.525060 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:54.535685 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:54.535760 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:54.563912 1498704 cri.go:89] found id: ""
	I1217 02:07:54.563935 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.563944 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:54.563950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:54.564011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:54.588995 1498704 cri.go:89] found id: ""
	I1217 02:07:54.589020 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.589031 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:54.589038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:54.589101 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:54.615173 1498704 cri.go:89] found id: ""
	I1217 02:07:54.615198 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.615207 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:54.615214 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:54.615277 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:54.640498 1498704 cri.go:89] found id: ""
	I1217 02:07:54.640523 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.640532 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:54.640539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:54.640623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:54.666201 1498704 cri.go:89] found id: ""
	I1217 02:07:54.666226 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.666234 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:54.666241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:54.666303 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:54.690876 1498704 cri.go:89] found id: ""
	I1217 02:07:54.690899 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.690908 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:54.690915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:54.690974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:54.714932 1498704 cri.go:89] found id: ""
	I1217 02:07:54.715000 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.715024 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:54.715043 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:54.715133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:54.739880 1498704 cri.go:89] found id: ""
	I1217 02:07:54.739906 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.739926 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:54.739952 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:54.739978 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:54.804035 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:54.804056 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:54.804070 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:54.829994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:54.830030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.858611 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:54.858639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:54.921120 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:54.921196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
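	Each cycle above issues one crictl query per expected control-plane component; an equivalent standalone probe, with the component names copied from the log (run inside the node):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
	    done

	Every query in this stretch comes back empty, which is why each pass falls through to the raw journal and dmesg dumps.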
	I1217 02:07:57.438546 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.448669 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:57.448736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:57.475324 1498704 cri.go:89] found id: ""
	I1217 02:07:57.475346 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.475355 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:57.475362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:57.475419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:57.505098 1498704 cri.go:89] found id: ""
	I1217 02:07:57.505123 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.505131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:57.505137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:57.505196 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:57.529496 1498704 cri.go:89] found id: ""
	I1217 02:07:57.529519 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.529529 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:57.529535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:57.529601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:57.560154 1498704 cri.go:89] found id: ""
	I1217 02:07:57.560179 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.560188 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:57.560194 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:57.560256 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:57.584872 1498704 cri.go:89] found id: ""
	I1217 02:07:57.584898 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.584912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:57.584919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:57.584976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:57.611897 1498704 cri.go:89] found id: ""
	I1217 02:07:57.611930 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.611938 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:57.611945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:57.612004 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:57.636969 1498704 cri.go:89] found id: ""
	I1217 02:07:57.636991 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.636999 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:57.637006 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:57.637069 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:57.661285 1498704 cri.go:89] found id: ""
	I1217 02:07:57.661312 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.661320 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:57.661329 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:57.661340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:57.717030 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:57.717066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.732556 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:57.732588 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:57.802383 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:57.802403 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:57.802414 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:57.831640 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:57.831729 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
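	The container-status command above is a fallback chain: resolve crictl if installed (keeping the literal name so the error message stays readable if it is not), then fall back to docker ps when the crictl invocation fails. A simplified equivalent (the original also falls back when crictl exists but errors):

	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi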
	W1217 02:07:56.634914 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:58.635189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:01.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:00.359786 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.375104 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:00.375194 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:00.418191 1498704 cri.go:89] found id: ""
	I1217 02:08:00.418222 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.418232 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:00.418239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:00.418315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:00.456739 1498704 cri.go:89] found id: ""
	I1217 02:08:00.456766 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.456775 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:00.456782 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:00.456850 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:00.488069 1498704 cri.go:89] found id: ""
	I1217 02:08:00.488097 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.488106 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:00.488115 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:00.488180 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:00.522338 1498704 cri.go:89] found id: ""
	I1217 02:08:00.522369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.522383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:00.522391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:00.522477 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:00.552999 1498704 cri.go:89] found id: ""
	I1217 02:08:00.553026 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.553035 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:00.553041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:00.553105 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:00.579678 1498704 cri.go:89] found id: ""
	I1217 02:08:00.579710 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.579719 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:00.579725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:00.579787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:00.605680 1498704 cri.go:89] found id: ""
	I1217 02:08:00.605708 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.605717 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:00.605724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:00.605787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:00.632147 1498704 cri.go:89] found id: ""
	I1217 02:08:00.632172 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.632181 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:00.632191 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:00.632202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:00.658405 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:00.658442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.687017 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:00.687042 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:00.743960 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:00.743997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:00.758928 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:00.758957 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:00.826075 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:08:03.634990 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:05.635168 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:03.326352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.337106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:03.337176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:03.362079 1498704 cri.go:89] found id: ""
	I1217 02:08:03.362103 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.362112 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:03.362120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:03.362185 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:03.406055 1498704 cri.go:89] found id: ""
	I1217 02:08:03.406078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.406086 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:03.406092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:03.406153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:03.469689 1498704 cri.go:89] found id: ""
	I1217 02:08:03.469719 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.469728 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:03.469734 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:03.469795 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:03.495363 1498704 cri.go:89] found id: ""
	I1217 02:08:03.495388 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.495397 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:03.495403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:03.495462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:03.520987 1498704 cri.go:89] found id: ""
	I1217 02:08:03.521020 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.521029 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:03.521035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:03.521104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:03.546993 1498704 cri.go:89] found id: ""
	I1217 02:08:03.547070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.547086 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:03.547094 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:03.547157 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:03.572356 1498704 cri.go:89] found id: ""
	I1217 02:08:03.572381 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.572390 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:03.572396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:03.572465 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:03.601007 1498704 cri.go:89] found id: ""
	I1217 02:08:03.601039 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.601048 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:03.601058 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:03.601069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:03.626163 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:03.626198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:03.653854 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:03.653882 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:03.711530 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:03.711566 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:03.726308 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:03.726377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:03.794467 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:06.296166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.306860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:06.306931 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:06.335081 1498704 cri.go:89] found id: ""
	I1217 02:08:06.335118 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.335128 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:06.335140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:06.335216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:06.360315 1498704 cri.go:89] found id: ""
	I1217 02:08:06.360337 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.360346 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:06.360353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:06.360416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:06.438162 1498704 cri.go:89] found id: ""
	I1217 02:08:06.438184 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.438193 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:06.438201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:06.438260 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:06.473712 1498704 cri.go:89] found id: ""
	I1217 02:08:06.473739 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.473750 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:06.473757 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:06.473821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:06.501185 1498704 cri.go:89] found id: ""
	I1217 02:08:06.501213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.501223 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:06.501229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:06.501291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:06.527618 1498704 cri.go:89] found id: ""
	I1217 02:08:06.527642 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.527650 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:06.527657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:06.527723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:06.551855 1498704 cri.go:89] found id: ""
	I1217 02:08:06.551882 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.551892 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:06.551899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:06.551982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:06.577516 1498704 cri.go:89] found id: ""
	I1217 02:08:06.577547 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.577556 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:06.577566 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:06.577577 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:06.592728 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:06.592762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:06.660537 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:06.660559 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:06.660572 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:06.685272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:06.685307 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:06.716733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:06.716761 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
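	Every describe-nodes attempt in this stretch fails identically because nothing answers on localhost:8443; a cheaper readiness probe from inside the node, reusing the binary and kubeconfig paths from the log, is the apiserver's /readyz endpoint:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz

	This prints "ok" once the apiserver is serving, and the same connection-refused error until then.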
	W1217 02:08:07.635213 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:10.134640 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:09.274376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.285055 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:09.285129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:09.310445 1498704 cri.go:89] found id: ""
	I1217 02:08:09.310468 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.310477 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:09.310483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:09.310551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:09.339399 1498704 cri.go:89] found id: ""
	I1217 02:08:09.339434 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.339443 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:09.339449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:09.339539 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:09.364792 1498704 cri.go:89] found id: ""
	I1217 02:08:09.364830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.364843 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:09.364851 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:09.364921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:09.398786 1498704 cri.go:89] found id: ""
	I1217 02:08:09.398813 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.398822 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:09.398829 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:09.398898 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:09.437605 1498704 cri.go:89] found id: ""
	I1217 02:08:09.437633 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.437670 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:09.437696 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:09.437778 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:09.469389 1498704 cri.go:89] found id: ""
	I1217 02:08:09.469430 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.469439 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:09.469446 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:09.469557 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:09.501822 1498704 cri.go:89] found id: ""
	I1217 02:08:09.501847 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.501856 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:09.501873 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:09.501953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:09.526536 1498704 cri.go:89] found id: ""
	I1217 02:08:09.526604 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.526627 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:09.526649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:09.526685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:09.553800 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:09.553829 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:09.611333 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:09.611367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:09.626057 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:09.626083 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:09.690274 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:09.690296 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:09.690308 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
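Each retry cycle above probes the full set of expected control-plane containers with crictl before re-gathering logs. A condensed sketch of that loop, run inside the node and assuming only the crictl invocations shown in this log; every empty result corresponds to a `found id: ""` line:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # --quiet prints only container IDs; -a includes exited containers.
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\""
	  else
	    echo "$name -> $ids"
	  fi
	done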
	I1217 02:08:12.216656 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.226983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:12.227094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:12.251590 1498704 cri.go:89] found id: ""
	I1217 02:08:12.251613 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.251622 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:12.251628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:12.251686 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:12.276257 1498704 cri.go:89] found id: ""
	I1217 02:08:12.276285 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.276293 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:12.276308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:12.276365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:12.300603 1498704 cri.go:89] found id: ""
	I1217 02:08:12.300628 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.300637 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:12.300643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:12.300704 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:12.328528 1498704 cri.go:89] found id: ""
	I1217 02:08:12.328552 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.328561 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:12.328571 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:12.328629 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:12.353931 1498704 cri.go:89] found id: ""
	I1217 02:08:12.353954 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.353963 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:12.353969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:12.354031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:12.426173 1498704 cri.go:89] found id: ""
	I1217 02:08:12.426238 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.426263 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:12.426283 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:12.426375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:12.463406 1498704 cri.go:89] found id: ""
	I1217 02:08:12.463432 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.463441 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:12.463447 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:12.463511 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:12.491432 1498704 cri.go:89] found id: ""
	I1217 02:08:12.491457 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.491466 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:12.491476 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:12.491487 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:12.549942 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:12.549979 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:12.566124 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:12.566160 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:12.632809 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:12.632878 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:12.632899 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.657969 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:12.658007 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:12.635367 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:14.635409 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
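The interleaved node_ready warnings belong to a second profile (no-preload-178365, process 1494358) polling its own apiserver at 192.168.76.2:8443 and hitting the same connection-refused state. A minimal sketch of that readiness check done by hand, assuming the kubectl context minikube names after the profile:

	# Read the node's Ready condition; with the apiserver down this fails
	# the same way the Get in the warning does.
	kubectl --context no-preload-178365 get node no-preload-178365 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or dial the exact URL from the warning:
	curl -ks https://192.168.76.2:8443/api/v1/nodes/no-preload-178365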
	I1217 02:08:15.189789 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.200614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:15.200684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:15.224844 1498704 cri.go:89] found id: ""
	I1217 02:08:15.224865 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.224874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:15.224880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:15.224939 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:15.253351 1498704 cri.go:89] found id: ""
	I1217 02:08:15.253417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.253441 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:15.253459 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:15.253547 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:15.278140 1498704 cri.go:89] found id: ""
	I1217 02:08:15.278216 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.278238 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:15.278257 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:15.278335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:15.303296 1498704 cri.go:89] found id: ""
	I1217 02:08:15.303325 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.303334 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:15.303340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:15.303399 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:15.332342 1498704 cri.go:89] found id: ""
	I1217 02:08:15.332369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.332379 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:15.332386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:15.332442 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:15.361393 1498704 cri.go:89] found id: ""
	I1217 02:08:15.361417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.361426 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:15.361432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:15.361501 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:15.399309 1498704 cri.go:89] found id: ""
	I1217 02:08:15.399335 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.399343 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:15.399350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:15.399409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:15.441743 1498704 cri.go:89] found id: ""
	I1217 02:08:15.441769 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.441778 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:15.441787 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:15.441799 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:15.508941 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:15.508977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:15.524099 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:15.524127 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:15.595333 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:15.595351 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:15.595367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:15.620921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:15.620958 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:17.135481 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:19.635228 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:18.151199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.162135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:18.162207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:18.190085 1498704 cri.go:89] found id: ""
	I1217 02:08:18.190108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.190116 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:18.190123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:18.190186 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:18.218906 1498704 cri.go:89] found id: ""
	I1217 02:08:18.218930 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.218938 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:18.218944 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:18.219002 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:18.242454 1498704 cri.go:89] found id: ""
	I1217 02:08:18.242476 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.242484 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:18.242490 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:18.242549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:18.267483 1498704 cri.go:89] found id: ""
	I1217 02:08:18.267505 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.267514 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:18.267527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:18.267587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:18.291870 1498704 cri.go:89] found id: ""
	I1217 02:08:18.291894 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.291902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:18.291909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:18.291970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:18.315514 1498704 cri.go:89] found id: ""
	I1217 02:08:18.315543 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.315551 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:18.315558 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:18.315617 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:18.338958 1498704 cri.go:89] found id: ""
	I1217 02:08:18.338980 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.338988 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:18.338995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:18.339052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:18.362300 1498704 cri.go:89] found id: ""
	I1217 02:08:18.362326 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.362339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:18.362349 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:18.362361 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:18.441796 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:18.441881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:18.465294 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:18.465318 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:18.527976 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:18.527999 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:18.528012 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:18.552941 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:18.552971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
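Between probes the harness re-gathers the same four log sources. The commands below are copied from the Run: lines above and can be replayed inside the node; -n 400 caps each journal at its last 400 lines:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	# -P: no pager, -H: human-readable, -L=never: no color; keep only
	# warning-or-worse kernel messages.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400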
	I1217 02:08:21.080554 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.090872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:21.090951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:21.119427 1498704 cri.go:89] found id: ""
	I1217 02:08:21.119451 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.119459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:21.119466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:21.119531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:21.145488 1498704 cri.go:89] found id: ""
	I1217 02:08:21.145509 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.145517 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:21.145524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:21.145589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:21.171795 1498704 cri.go:89] found id: ""
	I1217 02:08:21.171822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.171830 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:21.171837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:21.171897 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:21.200041 1498704 cri.go:89] found id: ""
	I1217 02:08:21.200067 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.200076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:21.200083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:21.200144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:21.224266 1498704 cri.go:89] found id: ""
	I1217 02:08:21.224294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.224302 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:21.224310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:21.224374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:21.249832 1498704 cri.go:89] found id: ""
	I1217 02:08:21.249859 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.249868 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:21.249875 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:21.249934 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:21.276533 1498704 cri.go:89] found id: ""
	I1217 02:08:21.276556 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.276565 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:21.276577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:21.276638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:21.302869 1498704 cri.go:89] found id: ""
	I1217 02:08:21.302898 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.302906 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:21.302920 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:21.302932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:21.359571 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:21.359612 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:21.386971 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:21.387000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:21.481485 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:21.481511 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:21.481523 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:21.510229 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:21.510266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:22.134985 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:24.135180 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:26.135497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:24.042457 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.053742 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:24.053815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:24.079751 1498704 cri.go:89] found id: ""
	I1217 02:08:24.079777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.079793 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:24.079801 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:24.079863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:24.106268 1498704 cri.go:89] found id: ""
	I1217 02:08:24.106294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.106304 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:24.106310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:24.106372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:24.136105 1498704 cri.go:89] found id: ""
	I1217 02:08:24.136127 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.136141 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:24.136147 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:24.136208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:24.162676 1498704 cri.go:89] found id: ""
	I1217 02:08:24.162704 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.162713 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:24.162719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:24.162781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:24.186881 1498704 cri.go:89] found id: ""
	I1217 02:08:24.186909 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.186918 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:24.186924 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:24.186983 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:24.211784 1498704 cri.go:89] found id: ""
	I1217 02:08:24.211807 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.211816 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:24.211823 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:24.211883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:24.239768 1498704 cri.go:89] found id: ""
	I1217 02:08:24.239791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.239799 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:24.239806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:24.239863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:24.267746 1498704 cri.go:89] found id: ""
	I1217 02:08:24.267826 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.267843 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:24.267853 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:24.267864 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:24.292626 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:24.292661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:24.324726 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:24.324756 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:24.386142 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:24.386184 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:24.417577 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:24.417605 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:24.496974 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
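The "describe nodes" gather uses the version-matched kubectl binary minikube stores inside the node, pointed at the node-local kubeconfig. Re-running it by hand (paths copied from the log) reproduces the exit status 1 and the connection-refused stderr captured above:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig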
	I1217 02:08:26.997267 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.015470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:27.015561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:27.041572 1498704 cri.go:89] found id: ""
	I1217 02:08:27.041593 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.041601 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:27.041608 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:27.041697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:27.067860 1498704 cri.go:89] found id: ""
	I1217 02:08:27.067884 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.067902 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:27.067923 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:27.068020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:27.091698 1498704 cri.go:89] found id: ""
	I1217 02:08:27.091722 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.091737 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:27.091744 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:27.091804 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:27.116923 1498704 cri.go:89] found id: ""
	I1217 02:08:27.116946 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.116954 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:27.116961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:27.117020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:27.142595 1498704 cri.go:89] found id: ""
	I1217 02:08:27.142619 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.142628 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:27.142634 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:27.142693 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:27.167169 1498704 cri.go:89] found id: ""
	I1217 02:08:27.167195 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.167204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:27.167211 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:27.167271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:27.191350 1498704 cri.go:89] found id: ""
	I1217 02:08:27.191376 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.191384 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:27.191391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:27.191451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:27.216388 1498704 cri.go:89] found id: ""
	I1217 02:08:27.216413 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.216422 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:27.216431 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:27.216442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:27.279861 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:27.279884 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:27.279900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:27.304990 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:27.305027 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:27.333926 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:27.333952 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:27.396365 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:27.396403 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
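Every cycle opens with a process-level liveness check before any crictl probe: with -f, pgrep matches against the full command line, -x requires the regex to match that command line exactly, and -n keeps only the newest match. A one-line sketch; exit status 1 here is what sends the harness into the log-gathering branch seen throughout this section:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	echo "pgrep exit status: $?"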
	W1217 02:08:28.635158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:30.635316 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:29.913629 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.924284 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:29.924359 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:29.951846 1498704 cri.go:89] found id: ""
	I1217 02:08:29.951873 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.951882 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:29.951888 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:29.951948 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:29.979680 1498704 cri.go:89] found id: ""
	I1217 02:08:29.979709 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.979718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:29.979724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:29.979783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:30.017361 1498704 cri.go:89] found id: ""
	I1217 02:08:30.017494 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.017508 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:30.017517 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:30.017600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:30.055966 1498704 cri.go:89] found id: ""
	I1217 02:08:30.055994 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.056008 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:30.056015 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:30.056153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:30.086268 1498704 cri.go:89] found id: ""
	I1217 02:08:30.086296 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.086305 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:30.086313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:30.086387 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:30.114436 1498704 cri.go:89] found id: ""
	I1217 02:08:30.114474 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.114485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:30.114493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:30.114563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:30.143104 1498704 cri.go:89] found id: ""
	I1217 02:08:30.143130 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.143140 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:30.143148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:30.143215 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:30.178848 1498704 cri.go:89] found id: ""
	I1217 02:08:30.178912 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.178928 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:30.178939 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:30.178950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:30.235226 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:30.235261 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:30.250400 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:30.250427 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:30.316823 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:30.316843 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:30.316855 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:30.341943 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:30.341985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
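The block above is one pass of minikube's log-gathering loop: it pgreps for a kube-apiserver process, asks the CRI for each expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of the same container probe, run by hand from a shell on the node (assuming crictl is installed and containerd is the runtime, as in this job), would be:

    # Sketch only: mirrors the crictl probes in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching $name"
    done

An empty result for every name, as seen here, suggests the static pods were never created at all, since "ps -a" also lists exited containers.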
	W1217 02:08:33.135099 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:35.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
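These interleaved warnings come from a second process (1494358) polling node no-preload-178365, and they fail the same way every time: nothing is listening on 192.168.76.2:8443. A quick reachability sketch from the host, assuming the node IP and port shown in the log:

    # Sketch only: probe the apiserver endpoint the retry loop is hitting.
    # -k skips TLS verification; we only care whether the port answers.
    curl -k --max-time 5 https://192.168.76.2:8443/healthz || echo "apiserver port not answering"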
	I1217 02:08:32.880177 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.891005 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:32.891073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:32.918870 1498704 cri.go:89] found id: ""
	I1217 02:08:32.918896 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.918905 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:32.918912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:32.918970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:32.944098 1498704 cri.go:89] found id: ""
	I1217 02:08:32.944123 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.944132 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:32.944137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:32.944197 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:32.968767 1498704 cri.go:89] found id: ""
	I1217 02:08:32.968791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.968801 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:32.968806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:32.968864 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:32.992596 1498704 cri.go:89] found id: ""
	I1217 02:08:32.992624 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.992632 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:32.992638 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:32.992702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:33.018400 1498704 cri.go:89] found id: ""
	I1217 02:08:33.018424 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.018433 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:33.018439 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:33.018497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:33.043622 1498704 cri.go:89] found id: ""
	I1217 02:08:33.043650 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.043660 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:33.043666 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:33.043728 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:33.068595 1498704 cri.go:89] found id: ""
	I1217 02:08:33.068617 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.068627 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:33.068633 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:33.068695 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:33.097084 1498704 cri.go:89] found id: ""
	I1217 02:08:33.097108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.097117 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:33.097126 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:33.097137 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:33.122964 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:33.123001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:33.151132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:33.151159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:33.206768 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:33.206805 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:33.221251 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:33.221330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:33.289516 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
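The describe-nodes step fails for the same underlying reason: the kubeconfig at /var/lib/minikube/kubeconfig points at localhost:8443, and with no kube-apiserver container running, nothing answers there. The step can be replayed in isolation with the exact command from the log (run on the node; the kubectl path is the version-pinned binary minikube installs):

    # Sketch only: the verbatim describe-nodes command from the log.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig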
	I1217 02:08:35.789806 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.800262 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:35.800330 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:35.824823 1498704 cri.go:89] found id: ""
	I1217 02:08:35.824844 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.824852 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:35.824859 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:35.824916 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:35.849352 1498704 cri.go:89] found id: ""
	I1217 02:08:35.849379 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.849388 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:35.849395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:35.849455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:35.873025 1498704 cri.go:89] found id: ""
	I1217 02:08:35.873045 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.873054 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:35.873060 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:35.873123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:35.897548 1498704 cri.go:89] found id: ""
	I1217 02:08:35.897572 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.897581 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:35.897586 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:35.897660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:35.927220 1498704 cri.go:89] found id: ""
	I1217 02:08:35.927283 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.927301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:35.927309 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:35.927374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:35.955050 1498704 cri.go:89] found id: ""
	I1217 02:08:35.955075 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.955083 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:35.955089 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:35.955168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:35.979074 1498704 cri.go:89] found id: ""
	I1217 02:08:35.979144 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.979160 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:35.979167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:35.979228 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:36.005502 1498704 cri.go:89] found id: ""
	I1217 02:08:36.005529 1498704 logs.go:282] 0 containers: []
	W1217 02:08:36.005557 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:36.005568 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:36.005582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:36.022508 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:36.022536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:36.088117 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:36.088139 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:36.088152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:36.112883 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:36.112917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:36.142584 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:36.142610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:37.635249 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:40.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:38.698261 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:38.709807 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:38.709880 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:38.734678 1498704 cri.go:89] found id: ""
	I1217 02:08:38.734703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.734712 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:38.734718 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:38.734777 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:38.764118 1498704 cri.go:89] found id: ""
	I1217 02:08:38.764145 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.764154 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:38.764161 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:38.764223 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:38.792269 1498704 cri.go:89] found id: ""
	I1217 02:08:38.792295 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.792305 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:38.792311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:38.792371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:38.817823 1498704 cri.go:89] found id: ""
	I1217 02:08:38.817845 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.817854 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:38.817861 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:38.817921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:38.846444 1498704 cri.go:89] found id: ""
	I1217 02:08:38.846469 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.846478 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:38.846484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:38.846575 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:38.870805 1498704 cri.go:89] found id: ""
	I1217 02:08:38.870830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.870839 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:38.870845 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:38.870909 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:38.902022 1498704 cri.go:89] found id: ""
	I1217 02:08:38.902047 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.902056 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:38.902063 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:38.902127 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:38.925802 1498704 cri.go:89] found id: ""
	I1217 02:08:38.925831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.925851 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:38.925860 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:38.925871 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.991113 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:38.991154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:39.006019 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:39.006049 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:39.074269 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:39.074328 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:39.074342 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:39.099793 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:39.099827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:41.629026 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.643330 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:41.643411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:41.702722 1498704 cri.go:89] found id: ""
	I1217 02:08:41.702743 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.702752 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:41.702758 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:41.702817 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:41.727343 1498704 cri.go:89] found id: ""
	I1217 02:08:41.727368 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.727377 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:41.727383 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:41.727443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:41.752306 1498704 cri.go:89] found id: ""
	I1217 02:08:41.752331 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.752340 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:41.752346 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:41.752409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:41.777003 1498704 cri.go:89] found id: ""
	I1217 02:08:41.777078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.777101 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:41.777121 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:41.777225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:41.801272 1498704 cri.go:89] found id: ""
	I1217 02:08:41.801298 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.801306 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:41.801313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:41.801371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:41.827046 1498704 cri.go:89] found id: ""
	I1217 02:08:41.827070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.827078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:41.827085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:41.827142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:41.855924 1498704 cri.go:89] found id: ""
	I1217 02:08:41.855956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.855965 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:41.855972 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:41.856042 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:41.882797 1498704 cri.go:89] found id: ""
	I1217 02:08:41.882821 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.882830 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:41.882840 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:41.882856 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:41.897281 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:41.897316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:41.963310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:41.963333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:41.963344 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:41.988494 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:41.988529 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:42.019738 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:42.019770 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:42.135661 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:44.635135 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:44.578521 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.589302 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:44.589376 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:44.614651 1498704 cri.go:89] found id: ""
	I1217 02:08:44.614676 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.614685 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:44.614692 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:44.614755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:44.666392 1498704 cri.go:89] found id: ""
	I1217 02:08:44.666414 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.666422 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:44.666429 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:44.666487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:44.722566 1498704 cri.go:89] found id: ""
	I1217 02:08:44.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.722599 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:44.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:44.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:44.747631 1498704 cri.go:89] found id: ""
	I1217 02:08:44.747656 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.747665 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:44.747671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:44.747730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:44.775719 1498704 cri.go:89] found id: ""
	I1217 02:08:44.775756 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.775765 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:44.775773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:44.775846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:44.801032 1498704 cri.go:89] found id: ""
	I1217 02:08:44.801056 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.801066 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:44.801072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:44.801131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:44.827838 1498704 cri.go:89] found id: ""
	I1217 02:08:44.827872 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.827883 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:44.827890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:44.827961 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:44.852948 1498704 cri.go:89] found id: ""
	I1217 02:08:44.852981 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.852990 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:44.853000 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:44.853011 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:44.908280 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:44.908314 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:44.923445 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:44.923538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:44.992600 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:44.992624 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:44.992637 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:45.027924 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:45.027975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
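With every container probe coming back empty, the host-side logs are the remaining evidence. The same bundle minikube gathers in each pass can be collected by hand with the commands it runs above (sketch, assuming a shell on the node):

    # Sketch only: the same diagnostics the gathering loop collects.
    sudo journalctl -u kubelet -n 400    > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo $(which crictl || echo crictl) ps -a > container-status.log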
	I1217 02:08:47.587759 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.598591 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:47.598664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:47.660378 1498704 cri.go:89] found id: ""
	I1217 02:08:47.660400 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.660408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:47.660414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:47.660472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:47.708467 1498704 cri.go:89] found id: ""
	I1217 02:08:47.708489 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.708498 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:47.708504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:47.708563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:47.733161 1498704 cri.go:89] found id: ""
	I1217 02:08:47.733183 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.733191 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:47.733198 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:47.733264 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:47.759190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.759213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.759222 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:47.759228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:47.759285 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:47.787579 1498704 cri.go:89] found id: ""
	I1217 02:08:47.787601 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.787610 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:47.787616 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:47.787697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:47.816190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.816215 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.816224 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:47.816231 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:47.816323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:47.843534 1498704 cri.go:89] found id: ""
	I1217 02:08:47.843562 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.843572 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:47.843578 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:47.843643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1217 02:08:47.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:49.634635 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:47.867806 1498704 cri.go:89] found id: ""
	I1217 02:08:47.867831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.867841 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:47.867852 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:47.867870 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:47.926619 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:47.926658 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:47.941706 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:47.941734 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:48.009461 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:48.009539 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:48.009561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:48.035273 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:48.035311 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.567421 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.578623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:50.578694 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:50.607374 1498704 cri.go:89] found id: ""
	I1217 02:08:50.607396 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.607405 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.607411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:50.607472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:50.666455 1498704 cri.go:89] found id: ""
	I1217 02:08:50.666484 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.666493 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.666499 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:50.666559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:50.717784 1498704 cri.go:89] found id: ""
	I1217 02:08:50.717822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.717831 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.717838 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:50.717941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:50.748500 1498704 cri.go:89] found id: ""
	I1217 02:08:50.748531 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.748543 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.748550 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:50.748618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:50.774642 1498704 cri.go:89] found id: ""
	I1217 02:08:50.774668 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.774677 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.774683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:50.774742 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:50.803738 1498704 cri.go:89] found id: ""
	I1217 02:08:50.803760 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.803769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.803776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:50.803840 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:50.828145 1498704 cri.go:89] found id: ""
	I1217 02:08:50.828212 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.828238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.828256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:50.828335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:50.853950 1498704 cri.go:89] found id: ""
	I1217 02:08:50.853976 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.853985 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.853995 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.854006 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.910278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.910316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.924980 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.925008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.992234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:50.992257 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:50.992271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:51.018744 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:51.018778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
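The eight crictl probes above repeat once per poll: minikube asks the CRI runtime for each expected control-plane container by name and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal Go sketch of that probe loop follows, for orientation only; minikube runs these commands over SSH inside the node via ssh_runner, whereas this sketch shells out locally and assumes sudo and crictl are on the PATH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same component names the log probes, in the same order.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// Matches the log's: No container was found matching "<component>"
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%q: found %v\n", name, ids)
    	}
    }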
	W1217 02:08:52.134591 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:54.134633 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:53.547953 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.558518 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:53.558593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:53.583100 1498704 cri.go:89] found id: ""
	I1217 02:08:53.583125 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.583134 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.583141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:53.583202 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:53.607925 1498704 cri.go:89] found id: ""
	I1217 02:08:53.607948 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.607956 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.607962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:53.608023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:53.657081 1498704 cri.go:89] found id: ""
	I1217 02:08:53.657104 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.657127 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.657135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:53.657208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:53.704278 1498704 cri.go:89] found id: ""
	I1217 02:08:53.704305 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.704313 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.704321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:53.704381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:53.730823 1498704 cri.go:89] found id: ""
	I1217 02:08:53.730851 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.730860 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.730868 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:53.730928 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:53.757094 1498704 cri.go:89] found id: ""
	I1217 02:08:53.757116 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.757125 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.757132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:53.757192 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:53.786671 1498704 cri.go:89] found id: ""
	I1217 02:08:53.786696 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.786705 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.786711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:53.786768 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:53.810935 1498704 cri.go:89] found id: ""
	I1217 02:08:53.810957 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.810966 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.810975 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.810986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.866107 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.866140 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:53.881003 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.881037 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.945396 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:53.945419 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:53.945432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:53.973428 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.973469 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:56.504673 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.515738 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:56.515816 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:56.540741 1498704 cri.go:89] found id: ""
	I1217 02:08:56.540765 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.540773 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.540780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:56.540846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:56.565810 1498704 cri.go:89] found id: ""
	I1217 02:08:56.565831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.565840 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.565846 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:56.565907 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:56.596074 1498704 cri.go:89] found id: ""
	I1217 02:08:56.596096 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.596105 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.596112 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:56.596173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:56.636207 1498704 cri.go:89] found id: ""
	I1217 02:08:56.636229 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.636238 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.636244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:56.636304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:56.698720 1498704 cri.go:89] found id: ""
	I1217 02:08:56.698749 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.698758 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.698765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:56.698838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:56.732897 1498704 cri.go:89] found id: ""
	I1217 02:08:56.732918 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.732926 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.732933 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:56.732999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:56.762677 1498704 cri.go:89] found id: ""
	I1217 02:08:56.762703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.762712 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.762719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:56.762779 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:56.788307 1498704 cri.go:89] found id: ""
	I1217 02:08:56.788333 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.788342 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.788352 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.788364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.844513 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.844548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.858936 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.858968 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.925270 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:56.925293 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:56.925305 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:56.951928 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.951967 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
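Every "describe nodes" attempt above dies the same way: kubectl inside the node dials localhost:8443 and gets connection refused, because no kube-apiserver container exists to listen there. The failure reduces to a closed TCP port, as this small Go probe illustrates; host and port are taken from the log, and this is a reproduction aid, not minikube code.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// kubectl's "connection refused" on [::1]:8443 reduces to this dial failing.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }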
	W1217 02:08:56.634544 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:58.634782 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:01.135356 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:59.483487 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.494825 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:59.494899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:59.520751 1498704 cri.go:89] found id: ""
	I1217 02:08:59.520777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.520785 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.520792 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:59.520851 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:59.546097 1498704 cri.go:89] found id: ""
	I1217 02:08:59.546122 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.546131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.546138 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:59.546205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:59.571525 1498704 cri.go:89] found id: ""
	I1217 02:08:59.571548 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.571556 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.571562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:59.571635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:59.595916 1498704 cri.go:89] found id: ""
	I1217 02:08:59.595944 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.595952 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.595959 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:59.596021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:59.677470 1498704 cri.go:89] found id: ""
	I1217 02:08:59.677497 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.677506 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.677512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:59.677577 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:59.708285 1498704 cri.go:89] found id: ""
	I1217 02:08:59.708311 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.708320 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.708328 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:59.708388 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:59.735444 1498704 cri.go:89] found id: ""
	I1217 02:08:59.735466 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.735474 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.735481 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:59.735551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:59.758934 1498704 cri.go:89] found id: ""
	I1217 02:08:59.758956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.758964 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.758974 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.758985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.786487 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.786513 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.843688 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.843719 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.858632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.858661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.922844 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:08:59.922867 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:59.922888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.448942 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.459473 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:02.459570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:02.487463 1498704 cri.go:89] found id: ""
	I1217 02:09:02.487486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.487494 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.487529 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:02.487591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:02.516013 1498704 cri.go:89] found id: ""
	I1217 02:09:02.516038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.516047 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.516053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:02.516118 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:02.541783 1498704 cri.go:89] found id: ""
	I1217 02:09:02.541806 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.541814 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.541820 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:02.541876 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:02.566427 1498704 cri.go:89] found id: ""
	I1217 02:09:02.566450 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.566459 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.566465 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:02.566561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:02.590894 1498704 cri.go:89] found id: ""
	I1217 02:09:02.590917 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.590926 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.590932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:02.590998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:02.614645 1498704 cri.go:89] found id: ""
	I1217 02:09:02.614668 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.614677 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.614683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:02.614747 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:02.656626 1498704 cri.go:89] found id: ""
	I1217 02:09:02.656662 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.656671 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.656681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:02.656751 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:02.702753 1498704 cri.go:89] found id: ""
	I1217 02:09:02.702787 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.702796 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.702806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.702817 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.772243 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:02.772266 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:02.772278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.797608 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.797893 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:02.829032 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.829057 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:03.634729 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:06.135608 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:02.886939 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.886975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.401718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.412408 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:05.412488 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:05.441786 1498704 cri.go:89] found id: ""
	I1217 02:09:05.441821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.441830 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.441837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:05.441908 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:05.466385 1498704 cri.go:89] found id: ""
	I1217 02:09:05.466408 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.466416 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.466422 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:05.466481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:05.491033 1498704 cri.go:89] found id: ""
	I1217 02:09:05.491057 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.491066 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.491072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:05.491131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:05.515650 1498704 cri.go:89] found id: ""
	I1217 02:09:05.515675 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.515684 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.515691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:05.515753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:05.539973 1498704 cri.go:89] found id: ""
	I1217 02:09:05.539996 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.540004 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.540016 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:05.540077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:05.565317 1498704 cri.go:89] found id: ""
	I1217 02:09:05.565338 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.565347 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.565353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:05.565414 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:05.590136 1498704 cri.go:89] found id: ""
	I1217 02:09:05.590161 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.590169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.590176 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:05.590240 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:05.614696 1498704 cri.go:89] found id: ""
	I1217 02:09:05.614733 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.614742 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.614752 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.614762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.682980 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.683022 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.700674 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.700704 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.777617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:05.777635 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:05.777670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:05.803121 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.803155 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:08.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:10.635438 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:08.332434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.343036 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:08.343108 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:08.367411 1498704 cri.go:89] found id: ""
	I1217 02:09:08.367434 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.367443 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.367449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:08.367517 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:08.391668 1498704 cri.go:89] found id: ""
	I1217 02:09:08.391695 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.391704 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.391712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:08.391775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:08.415929 1498704 cri.go:89] found id: ""
	I1217 02:09:08.415953 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.415961 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.415968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:08.416050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:08.441685 1498704 cri.go:89] found id: ""
	I1217 02:09:08.441755 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.441779 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.441798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:08.441888 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:08.466687 1498704 cri.go:89] found id: ""
	I1217 02:09:08.466713 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.466722 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.466728 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:08.466808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:08.491044 1498704 cri.go:89] found id: ""
	I1217 02:09:08.491069 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.491078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.491085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:08.491190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:08.517483 1498704 cri.go:89] found id: ""
	I1217 02:09:08.517508 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.517517 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.517524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:08.517593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:08.543991 1498704 cri.go:89] found id: ""
	I1217 02:09:08.544017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.544026 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.544035 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.544053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.608510 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.608567 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.642989 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.643026 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.751212 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:08.751241 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:08.751254 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:08.779142 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.779180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.312760 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.327627 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:11.327714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:11.352557 1498704 cri.go:89] found id: ""
	I1217 02:09:11.352580 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.352588 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.352595 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:11.352654 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:11.378891 1498704 cri.go:89] found id: ""
	I1217 02:09:11.378913 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.378922 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.378928 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:11.378987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:11.403393 1498704 cri.go:89] found id: ""
	I1217 02:09:11.403416 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.403424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.403430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:11.403489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:11.432435 1498704 cri.go:89] found id: ""
	I1217 02:09:11.432459 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.432472 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.432479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:11.432565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:11.458410 1498704 cri.go:89] found id: ""
	I1217 02:09:11.458436 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.458445 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.458451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:11.458510 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:11.484113 1498704 cri.go:89] found id: ""
	I1217 02:09:11.484140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.484149 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.484156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:11.484216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:11.511088 1498704 cri.go:89] found id: ""
	I1217 02:09:11.511112 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.511121 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.511128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:11.511191 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:11.540295 1498704 cri.go:89] found id: ""
	I1217 02:09:11.540324 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.540333 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.540342 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.540354 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.554828 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.554857 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:11.615811 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:11.615835 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:11.615849 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:11.643999 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.644035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.696705 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.696733 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:13.134531 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:14.634797 1494358 node_ready.go:38] duration metric: took 6m0.000749408s for node "no-preload-178365" to be "Ready" ...
	I1217 02:09:14.638073 1494358 out.go:203] 
	W1217 02:09:14.640977 1494358 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:09:14.641013 1494358 out.go:285] * 
	W1217 02:09:14.643229 1494358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:09:14.646121 1494358 out.go:203] 
	I1217 02:09:14.265939 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.276062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:14.276129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:14.301710 1498704 cri.go:89] found id: ""
	I1217 02:09:14.301736 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.301744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.301753 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:14.301811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:14.327085 1498704 cri.go:89] found id: ""
	I1217 02:09:14.327111 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.327119 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.327125 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:14.327182 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:14.351112 1498704 cri.go:89] found id: ""
	I1217 02:09:14.351134 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.351142 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.351148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:14.351208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:14.379796 1498704 cri.go:89] found id: ""
	I1217 02:09:14.379823 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.379833 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.379840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:14.379902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:14.404135 1498704 cri.go:89] found id: ""
	I1217 02:09:14.404158 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.404167 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.404172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:14.404234 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:14.428171 1498704 cri.go:89] found id: ""
	I1217 02:09:14.428194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.428204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.428212 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:14.428272 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:14.455193 1498704 cri.go:89] found id: ""
	I1217 02:09:14.455217 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.455225 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.455232 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:14.455292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:14.479959 1498704 cri.go:89] found id: ""
	I1217 02:09:14.479985 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.479994 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.480003 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.480014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.537013 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.537048 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:14.551864 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:14.551888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:14.616449 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:14.616522 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:14.616551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:14.646206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:14.646248 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.269774 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.280406 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:17.280478 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:17.305501 1498704 cri.go:89] found id: ""
	I1217 02:09:17.305529 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.305537 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.305544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:17.305601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:17.330336 1498704 cri.go:89] found id: ""
	I1217 02:09:17.330361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.330370 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.330377 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:17.330436 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:17.355210 1498704 cri.go:89] found id: ""
	I1217 02:09:17.355235 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.355250 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.355256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:17.355315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:17.380868 1498704 cri.go:89] found id: ""
	I1217 02:09:17.380893 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.380901 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.380908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:17.380968 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:17.406748 1498704 cri.go:89] found id: ""
	I1217 02:09:17.406771 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.406779 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.406785 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:17.406844 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:17.431237 1498704 cri.go:89] found id: ""
	I1217 02:09:17.431263 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.431272 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.431279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:17.431337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:17.455474 1498704 cri.go:89] found id: ""
	I1217 02:09:17.455500 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.455516 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.455523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:17.455586 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:17.479040 1498704 cri.go:89] found id: ""
	I1217 02:09:17.479062 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.479070 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.479079 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:17.479092 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.511305 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.511333 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:17.567635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:17.567672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:17.583863 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:17.583892 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:17.655165 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:17.655185 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:17.655198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.181833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.192614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:20.192732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:20.219176 1498704 cri.go:89] found id: ""
	I1217 02:09:20.219199 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.219208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.219215 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:20.219275 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:20.248198 1498704 cri.go:89] found id: ""
	I1217 02:09:20.248224 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.248233 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.248239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:20.248299 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:20.273332 1498704 cri.go:89] found id: ""
	I1217 02:09:20.273355 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.273363 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.273370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:20.273429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:20.299548 1498704 cri.go:89] found id: ""
	I1217 02:09:20.299621 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.299655 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.299668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:20.299741 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:20.328882 1498704 cri.go:89] found id: ""
	I1217 02:09:20.328911 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.328919 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.328925 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:20.328987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:20.354861 1498704 cri.go:89] found id: ""
	I1217 02:09:20.354887 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.354898 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.354904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:20.354999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:20.380708 1498704 cri.go:89] found id: ""
	I1217 02:09:20.380744 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.380754 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:20.380761 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:20.380833 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:20.410724 1498704 cri.go:89] found id: ""
	I1217 02:09:20.410749 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.410758 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:20.410767 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:20.410778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:20.470014 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:20.470053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:20.484955 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:20.484989 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:20.548617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:20.548637 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:20.548649 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.573994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:20.574030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:23.106211 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.116663 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:23.116732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:23.144995 1498704 cri.go:89] found id: ""
	I1217 02:09:23.145017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.145025 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.145031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:23.145089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:23.172623 1498704 cri.go:89] found id: ""
	I1217 02:09:23.172651 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.172660 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.172668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:23.172727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:23.201388 1498704 cri.go:89] found id: ""
	I1217 02:09:23.201415 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.201424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.201437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:23.201500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:23.225335 1498704 cri.go:89] found id: ""
	I1217 02:09:23.225361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.225370 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.225376 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:23.225433 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:23.251629 1498704 cri.go:89] found id: ""
	I1217 02:09:23.251654 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.251662 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:23.251668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:23.251733 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:23.279092 1498704 cri.go:89] found id: ""
	I1217 02:09:23.279120 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.279129 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:23.279136 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:23.279199 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:23.303104 1498704 cri.go:89] found id: ""
	I1217 02:09:23.303126 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.303134 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:23.303140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:23.303204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:23.327448 1498704 cri.go:89] found id: ""
	I1217 02:09:23.327479 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.327488 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:23.327497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:23.327544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:23.394139 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:23.394186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:23.409933 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:23.409961 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:23.478459 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:23.478484 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:23.478498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:23.503474 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:23.503515 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:26.036615 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.047567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:26.047682 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:26.072876 1498704 cri.go:89] found id: ""
	I1217 02:09:26.072903 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.072912 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.072919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:26.072981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:26.100352 1498704 cri.go:89] found id: ""
	I1217 02:09:26.100378 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.100387 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:26.100392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:26.100450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:26.135848 1498704 cri.go:89] found id: ""
	I1217 02:09:26.135875 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.135884 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:26.135890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:26.135950 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:26.168993 1498704 cri.go:89] found id: ""
	I1217 02:09:26.169020 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.169028 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:26.169035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:26.169094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:26.210553 1498704 cri.go:89] found id: ""
	I1217 02:09:26.210581 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.210590 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:26.210597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:26.210659 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:26.236497 1498704 cri.go:89] found id: ""
	I1217 02:09:26.236526 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.236534 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:26.236541 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:26.236600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:26.261964 1498704 cri.go:89] found id: ""
	I1217 02:09:26.261989 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.261997 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:26.262004 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:26.262090 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:26.288105 1498704 cri.go:89] found id: ""
	I1217 02:09:26.288138 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.288148 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:26.288157 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:26.288168 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:26.343617 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:26.343650 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:26.358285 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:26.358312 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:26.424304 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:26.424327 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:26.424340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:26.450148 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:26.450185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:28.978571 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:28.990745 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:28.990835 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:29.015938 1498704 cri.go:89] found id: ""
	I1217 02:09:29.015962 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.015971 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:29.015977 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:29.016035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:29.041116 1498704 cri.go:89] found id: ""
	I1217 02:09:29.041141 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.041149 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:29.041156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:29.041217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:29.066014 1498704 cri.go:89] found id: ""
	I1217 02:09:29.066036 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.066044 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:29.066051 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:29.066107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:29.090514 1498704 cri.go:89] found id: ""
	I1217 02:09:29.090539 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.090548 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:29.090554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:29.090640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:29.114384 1498704 cri.go:89] found id: ""
	I1217 02:09:29.114405 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.114414 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:29.114420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:29.114506 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:29.143954 1498704 cri.go:89] found id: ""
	I1217 02:09:29.143977 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.143987 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:29.143995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:29.144081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:29.185816 1498704 cri.go:89] found id: ""
	I1217 02:09:29.185839 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.185847 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:29.185864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:29.185941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:29.214738 1498704 cri.go:89] found id: ""
	I1217 02:09:29.214761 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.214770 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:29.214780 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:29.214807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:29.244598 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:29.244623 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:29.300237 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:29.300271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.314809 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:29.314874 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:29.380612 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:29.380633 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:29.380645 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:31.905779 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:31.917874 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:31.917963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:31.946726 1498704 cri.go:89] found id: ""
	I1217 02:09:31.946750 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.946759 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:31.946766 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:31.946829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:31.971653 1498704 cri.go:89] found id: ""
	I1217 02:09:31.971677 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.971685 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:31.971691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:31.971753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:31.999116 1498704 cri.go:89] found id: ""
	I1217 02:09:31.999139 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.999147 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:31.999160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:31.999224 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:32.028438 1498704 cri.go:89] found id: ""
	I1217 02:09:32.028461 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.028470 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:32.028476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:32.028535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:32.053600 1498704 cri.go:89] found id: ""
	I1217 02:09:32.053623 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.053632 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:32.053639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:32.053734 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:32.080000 1498704 cri.go:89] found id: ""
	I1217 02:09:32.080023 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.080032 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:32.080038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:32.080100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:32.105557 1498704 cri.go:89] found id: ""
	I1217 02:09:32.105632 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.105700 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:32.105721 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:32.105814 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:32.142478 1498704 cri.go:89] found id: ""
	I1217 02:09:32.142506 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.142515 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:32.142524 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:32.142536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:32.158591 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:32.158625 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:32.222822 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:32.222896 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:32.222917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:32.248192 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:32.248226 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:32.275127 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:32.275152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:34.830607 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:34.841178 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:34.841251 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:34.866230 1498704 cri.go:89] found id: ""
	I1217 02:09:34.866254 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.866263 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:34.866270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:34.866347 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:34.895167 1498704 cri.go:89] found id: ""
	I1217 02:09:34.895234 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.895251 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:34.895258 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:34.895317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:34.924481 1498704 cri.go:89] found id: ""
	I1217 02:09:34.924521 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.924530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:34.924537 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:34.924608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:34.953744 1498704 cri.go:89] found id: ""
	I1217 02:09:34.953814 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.953830 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:34.953837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:34.953910 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:34.978668 1498704 cri.go:89] found id: ""
	I1217 02:09:34.978735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.978755 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:34.978763 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:34.978823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:35.010506 1498704 cri.go:89] found id: ""
	I1217 02:09:35.010545 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.010554 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:35.010562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:35.010649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:35.037564 1498704 cri.go:89] found id: ""
	I1217 02:09:35.037591 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.037601 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:35.037607 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:35.037720 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:35.063033 1498704 cri.go:89] found id: ""
	I1217 02:09:35.063072 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.063093 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
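The probe loop above asks the CRI for containers matching each control-plane component by name; an empty ID list is what produces the "No container was found matching ..." warnings, meaning the component was never even created. A sketch of the same check, as a hypothetical wrapper around the crictl invocation shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same probe as the log: list containers in any state whose name
		// matches, printing only their IDs (--quiet).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```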
	I1217 02:09:35.063107 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:35.063123 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:35.119982 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:35.120059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:35.136426 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:35.136504 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:35.210581 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:35.210605 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:35.210617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:35.235901 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:35.235932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
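The whole cycle (pgrep for the apiserver, per-component crictl probes, log gathering) then repeats on roughly a three-second cadence until minikube's wait budget runs out. A sketch of such a poll-until-deadline loop; the six-minute deadline is an assumption for illustration, not the test's actual timeout:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep for a
// kube-apiserver process whose command line mentions "minikube".
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when at least one process matched
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed wait budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```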
	I1217 02:09:37.769826 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:37.780267 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:37.780361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:37.804770 1498704 cri.go:89] found id: ""
	I1217 02:09:37.804835 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.804858 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:37.804876 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:37.804947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:37.828942 1498704 cri.go:89] found id: ""
	I1217 02:09:37.828981 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.829006 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:37.829019 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:37.829098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:37.856624 1498704 cri.go:89] found id: ""
	I1217 02:09:37.856689 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.856714 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:37.856733 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:37.856808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:37.895741 1498704 cri.go:89] found id: ""
	I1217 02:09:37.895779 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.895789 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:37.895796 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:37.895870 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:37.928762 1498704 cri.go:89] found id: ""
	I1217 02:09:37.928795 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.928804 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:37.928811 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:37.928889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:37.964505 1498704 cri.go:89] found id: ""
	I1217 02:09:37.964530 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.964540 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:37.964557 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:37.964622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:37.990281 1498704 cri.go:89] found id: ""
	I1217 02:09:37.990306 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.990315 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:37.990321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:37.990409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:38.022757 1498704 cri.go:89] found id: ""
	I1217 02:09:38.022789 1498704 logs.go:282] 0 containers: []
	W1217 02:09:38.022799 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:38.022819 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:38.022839 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:38.082781 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:38.082818 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:38.098274 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:38.098303 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:38.181369 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:38.181394 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:38.181408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:38.211421 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:38.211459 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:40.744187 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:40.755584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:40.755657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:40.784265 1498704 cri.go:89] found id: ""
	I1217 02:09:40.784290 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.784299 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:40.784305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:40.784366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:40.812965 1498704 cri.go:89] found id: ""
	I1217 02:09:40.813034 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.813059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:40.813077 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:40.813170 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:40.838108 1498704 cri.go:89] found id: ""
	I1217 02:09:40.838135 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.838144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:40.838150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:40.838218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:40.863761 1498704 cri.go:89] found id: ""
	I1217 02:09:40.863797 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.863806 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:40.863814 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:40.863883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:40.896946 1498704 cri.go:89] found id: ""
	I1217 02:09:40.896973 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.896982 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:40.896990 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:40.897049 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:40.927040 1498704 cri.go:89] found id: ""
	I1217 02:09:40.927067 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.927076 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:40.927083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:40.927142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:40.953843 1498704 cri.go:89] found id: ""
	I1217 02:09:40.953869 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.953878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:40.953885 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:40.953947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:40.983898 1498704 cri.go:89] found id: ""
	I1217 02:09:40.983921 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.983929 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:40.983938 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:40.983950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:41.041172 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:41.041208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:41.056418 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:41.056454 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:41.119760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:41.119832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:41.119859 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:41.148272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:41.148479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.682654 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:43.694991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:43.695064 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:43.722566 1498704 cri.go:89] found id: ""
	I1217 02:09:43.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.722599 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:43.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:43.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:43.747132 1498704 cri.go:89] found id: ""
	I1217 02:09:43.747157 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.747165 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:43.747177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:43.747238 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:43.773465 1498704 cri.go:89] found id: ""
	I1217 02:09:43.773486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.773494 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:43.773500 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:43.773559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:43.798692 1498704 cri.go:89] found id: ""
	I1217 02:09:43.798716 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.798725 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:43.798731 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:43.798796 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:43.825731 1498704 cri.go:89] found id: ""
	I1217 02:09:43.825753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.825762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:43.825768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:43.825827 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:43.855796 1498704 cri.go:89] found id: ""
	I1217 02:09:43.855821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.855829 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:43.855836 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:43.855902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:43.886935 1498704 cri.go:89] found id: ""
	I1217 02:09:43.886960 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.886969 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:43.886975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:43.887035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:43.917934 1498704 cri.go:89] found id: ""
	I1217 02:09:43.917961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.917970 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:43.917979 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:43.917997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.947632 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:43.947659 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:44.003825 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:44.003866 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:44.019941 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:44.019972 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:44.089358 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:44.081196    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.081940    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.083656    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.084150    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.085419    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:44.081196    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.081940    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.083656    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.084150    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.085419    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:44.089380 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:44.089394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:46.615402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:46.625887 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:46.625979 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:46.650868 1498704 cri.go:89] found id: ""
	I1217 02:09:46.650891 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.650899 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:46.650906 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:46.650966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:46.675004 1498704 cri.go:89] found id: ""
	I1217 02:09:46.675025 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.675033 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:46.675039 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:46.675098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:46.698859 1498704 cri.go:89] found id: ""
	I1217 02:09:46.698880 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.698888 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:46.698899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:46.698966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:46.722103 1498704 cri.go:89] found id: ""
	I1217 02:09:46.722130 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.722139 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:46.722146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:46.722205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:46.749559 1498704 cri.go:89] found id: ""
	I1217 02:09:46.749582 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.749591 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:46.749598 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:46.749681 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:46.775252 1498704 cri.go:89] found id: ""
	I1217 02:09:46.775274 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.775282 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:46.775289 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:46.775368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:46.799706 1498704 cri.go:89] found id: ""
	I1217 02:09:46.799738 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.799747 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:46.799754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:46.799815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:46.825525 1498704 cri.go:89] found id: ""
	I1217 02:09:46.825552 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.825562 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:46.825596 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:46.825616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:46.898518 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:46.889823    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.890505    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892089    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892616    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.894554    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:46.889823    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.890505    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892089    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892616    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.894554    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:46.898546 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:46.898559 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:46.924328 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:46.924360 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:46.953287 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:46.953315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:47.008776 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:47.008811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.524226 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:49.535609 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:49.535691 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:49.563709 1498704 cri.go:89] found id: ""
	I1217 02:09:49.563735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.563744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:49.563751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:49.563829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:49.589205 1498704 cri.go:89] found id: ""
	I1217 02:09:49.589229 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.589238 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:49.589245 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:49.589305 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:49.615016 1498704 cri.go:89] found id: ""
	I1217 02:09:49.615038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.615046 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:49.615053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:49.615110 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:49.639299 1498704 cri.go:89] found id: ""
	I1217 02:09:49.639377 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.639407 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:49.639416 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:49.639514 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:49.664056 1498704 cri.go:89] found id: ""
	I1217 02:09:49.664079 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.664087 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:49.664093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:49.664151 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:49.688630 1498704 cri.go:89] found id: ""
	I1217 02:09:49.688652 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.688661 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:49.688667 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:49.688724 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:49.712428 1498704 cri.go:89] found id: ""
	I1217 02:09:49.712447 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.712461 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:49.712467 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:49.712525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:49.736311 1498704 cri.go:89] found id: ""
	I1217 02:09:49.736388 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.736412 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:49.736433 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:49.736473 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:49.792224 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:49.792264 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.806602 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:49.806639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.873760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:49.873781 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:49.873793 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:49.901849 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:49.901881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.452856 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:52.463628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:52.463707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:52.487769 1498704 cri.go:89] found id: ""
	I1217 02:09:52.487794 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.487802 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:52.487809 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:52.487901 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:52.515989 1498704 cri.go:89] found id: ""
	I1217 02:09:52.516013 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.516022 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:52.516028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:52.516136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:52.542514 1498704 cri.go:89] found id: ""
	I1217 02:09:52.542538 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.542547 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:52.542554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:52.542622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:52.567016 1498704 cri.go:89] found id: ""
	I1217 02:09:52.567050 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.567059 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:52.567067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:52.567129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:52.591935 1498704 cri.go:89] found id: ""
	I1217 02:09:52.591961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.591969 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:52.591975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:52.592035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:52.617548 1498704 cri.go:89] found id: ""
	I1217 02:09:52.617573 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.617583 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:52.617589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:52.617668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:52.642857 1498704 cri.go:89] found id: ""
	I1217 02:09:52.642881 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.642889 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:52.642895 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:52.642952 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:52.666997 1498704 cri.go:89] found id: ""
	I1217 02:09:52.667022 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.667031 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:52.667042 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:52.667055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.736175 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:52.736198 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:52.736210 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:52.761310 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.761340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.789730 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:52.789758 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:52.846428 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:52.846464 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.363216 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:55.378169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:55.378242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:55.405237 1498704 cri.go:89] found id: ""
	I1217 02:09:55.405262 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.405271 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:55.405277 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:55.405341 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:55.431829 1498704 cri.go:89] found id: ""
	I1217 02:09:55.431852 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.431860 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:55.431866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:55.431924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:55.464126 1498704 cri.go:89] found id: ""
	I1217 02:09:55.464149 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.464157 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:55.464163 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:55.464221 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:55.489098 1498704 cri.go:89] found id: ""
	I1217 02:09:55.489140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.489174 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:55.489188 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:55.489291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:55.514718 1498704 cri.go:89] found id: ""
	I1217 02:09:55.514753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.514762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:55.514768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:55.514828 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:55.538941 1498704 cri.go:89] found id: ""
	I1217 02:09:55.538964 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.538972 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:55.538979 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:55.539040 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:55.564206 1498704 cri.go:89] found id: ""
	I1217 02:09:55.564233 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.564242 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:55.564248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:55.564307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:55.588698 1498704 cri.go:89] found id: ""
	I1217 02:09:55.588722 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.588731 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:55.588740 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:55.588751 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.643314 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.643346 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.657901 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.657933 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.728753 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.720443   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.721112   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722240   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722829   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.724553   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:55.728775 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:55.728788 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:55.754781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.754822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
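
Each block above is one pass of the same health-wait loop: minikube probes for a kube-apiserver process, finds none, re-gathers diagnostics, and retries; the timestamps show roughly a three-second spacing between passes. A minimal Go sketch of such a loop follows; the helper names, signatures, and timeout are hypothetical, not minikube's actual implementation.

    package main

    import (
        "errors"
        "log"
        "os/exec"
        "time"
    )

    // pollAPIServer is a simplified sketch of the retry loop visible in
    // the log: probe for a kube-apiserver process, gather diagnostics on
    // each miss, then sleep and try again until the deadline expires.
    func pollAPIServer(run func(cmd string) error, gather func()) error {
        deadline := time.Now().Add(6 * time.Minute) // hypothetical budget
        for time.Now().Before(deadline) {
            // Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
            if err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                return nil // a kube-apiserver process exists
            }
            gather() // crictl listings, kubelet/dmesg/containerd logs, describe nodes
            time.Sleep(3 * time.Second)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        err := pollAPIServer(
            func(cmd string) error { return exec.Command("/bin/sh", "-c", cmd).Run() },
            func() { log.Println("gathering diagnostics ...") },
        )
        if err != nil {
            log.Fatal(err)
        }
    }
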
	I1217 02:09:58.282279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:58.292524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:58.292594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:58.320120 1498704 cri.go:89] found id: ""
	I1217 02:09:58.320144 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.320153 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:58.320160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:58.320219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:58.344609 1498704 cri.go:89] found id: ""
	I1217 02:09:58.344634 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.344643 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:58.344649 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:58.344714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:58.371166 1498704 cri.go:89] found id: ""
	I1217 02:09:58.371194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.371203 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:58.371209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:58.371267 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:58.399919 1498704 cri.go:89] found id: ""
	I1217 02:09:58.399947 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.399955 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:58.399961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:58.400029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:58.426746 1498704 cri.go:89] found id: ""
	I1217 02:09:58.426774 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.426783 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:58.426789 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:58.426849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:58.452086 1498704 cri.go:89] found id: ""
	I1217 02:09:58.452164 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.452187 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:58.452202 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:58.452313 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:58.479597 1498704 cri.go:89] found id: ""
	I1217 02:09:58.479640 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.479650 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:58.479657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:58.479735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:58.507631 1498704 cri.go:89] found id: ""
	I1217 02:09:58.507660 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.507668 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.507677 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.507688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.563330 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.563364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.577956 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.577986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.640599 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.632937   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.633485   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.634953   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.635364   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.636788   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:58.640618 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:58.640631 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:58.665542 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.665579 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
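
Every per-component query above comes back empty (found id: ""), i.e. containerd has no record of any control-plane container, not even an exited one. The query itself is the crictl invocation shown in the log; a Go wrapper around it might look like the sketch below (the function name is invented here).

    package diag

    import (
        "os/exec"
        "strings"
    )

    // listCRIContainers mirrors the command in the log:
    //   sudo crictl ps -a --quiet --name=<component>
    // --quiet prints one container ID per line; an empty result is what
    // produces the "No container was found matching ..." warnings above.
    func listCRIContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }
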
	I1217 02:10:01.193230 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:01.205093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:01.205168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:01.231574 1498704 cri.go:89] found id: ""
	I1217 02:10:01.231657 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.231671 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:01.231679 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:01.231755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:01.258626 1498704 cri.go:89] found id: ""
	I1217 02:10:01.258656 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.258665 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:01.258671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:01.258731 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:01.285028 1498704 cri.go:89] found id: ""
	I1217 02:10:01.285107 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.285130 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:01.285150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:01.285236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:01.311238 1498704 cri.go:89] found id: ""
	I1217 02:10:01.311260 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.311270 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:01.311276 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:01.311337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:01.335915 1498704 cri.go:89] found id: ""
	I1217 02:10:01.335938 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.335946 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.335953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:01.336013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:01.362270 1498704 cri.go:89] found id: ""
	I1217 02:10:01.362299 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.362310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.362317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:01.362386 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:01.389194 1498704 cri.go:89] found id: ""
	I1217 02:10:01.389272 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.389296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.389315 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:01.389404 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:01.425060 1498704 cri.go:89] found id: ""
	I1217 02:10:01.425133 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.425156 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.425178 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.425214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.484970 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.485005 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.500061 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.500089 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.568584 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.560770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.561180   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.562770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.563222   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.564705   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:01.568606 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:01.568618 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:01.594966 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.595000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
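
Each "describe nodes" attempt fails before any Kubernetes API call is made: the TCP connection to localhost:8443 is refused outright, which rules out auth or TLS problems and means nothing is listening on the apiserver port. The same check can be made without kubectl at all; a minimal sketch, with the address taken from the error text:

    package diag

    import (
        "net"
        "time"
    )

    // apiserverReachable reports whether anything accepts connections on
    // addr. "connect: connection refused", as in the log, means the dial
    // itself fails: no process is bound to the port.
    func apiserverReachable(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

For the whole stretch of this log, apiserverReachable("localhost:8443") would have returned false.
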
	I1217 02:10:04.124707 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:04.138794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:04.138889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:04.192615 1498704 cri.go:89] found id: ""
	I1217 02:10:04.192646 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.192657 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:04.192664 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:04.192738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:04.223099 1498704 cri.go:89] found id: ""
	I1217 02:10:04.223126 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.223135 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.223142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:04.223204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:04.251428 1498704 cri.go:89] found id: ""
	I1217 02:10:04.251451 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.251460 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.251466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:04.251549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:04.277739 1498704 cri.go:89] found id: ""
	I1217 02:10:04.277767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.277778 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.277786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:04.277849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:04.302600 1498704 cri.go:89] found id: ""
	I1217 02:10:04.302625 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.302633 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.302639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:04.302702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:04.328192 1498704 cri.go:89] found id: ""
	I1217 02:10:04.328221 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.328230 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.328237 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:04.328307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:04.354026 1498704 cri.go:89] found id: ""
	I1217 02:10:04.354049 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.354058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.354064 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:04.354125 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:04.387067 1498704 cri.go:89] found id: ""
	I1217 02:10:04.387101 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.387111 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.387140 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:04.387159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:04.420944 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.420981 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.453477 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.453511 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.509779 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.509814 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.525121 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.525151 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.596992 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.588312   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.589011   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590255   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590954   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.592734   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.097279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.107872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:07.107951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:07.140845 1498704 cri.go:89] found id: ""
	I1217 02:10:07.140873 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.140883 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.140889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:07.140949 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:07.171271 1498704 cri.go:89] found id: ""
	I1217 02:10:07.171293 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.171301 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.171307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:07.171368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:07.199048 1498704 cri.go:89] found id: ""
	I1217 02:10:07.199075 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.199085 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.199092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:07.199152 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:07.223715 1498704 cri.go:89] found id: ""
	I1217 02:10:07.223755 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.223765 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.223771 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:07.223838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:07.250683 1498704 cri.go:89] found id: ""
	I1217 02:10:07.250708 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.250718 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.250724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:07.250783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:07.274541 1498704 cri.go:89] found id: ""
	I1217 02:10:07.274614 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.274627 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.274661 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:07.274752 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:07.298768 1498704 cri.go:89] found id: ""
	I1217 02:10:07.298833 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.298859 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.298872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:07.298944 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:07.322447 1498704 cri.go:89] found id: ""
	I1217 02:10:07.322510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.322534 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.322549 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.322561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.392049 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.383394   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.384747   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386434   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386720   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.388152   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.392072 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:07.392086 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:07.419785 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.419819 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.448497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.448525 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.505149 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.505186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
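
This pass collects the same sources in a different order (describe nodes first, then containerd, container status, kubelet, dmesg); the set of sources is fixed even though the iteration order varies between passes. Each journal source is a bounded tail, for example the last 400 lines of a systemd unit. A sketch of that step, with an invented helper name:

    package diag

    import (
        "fmt"
        "os/exec"
    )

    // tailUnitLogs mirrors the journalctl calls in the log, e.g.
    //   sudo journalctl -u kubelet -n 400
    // Capping the tail keeps each diagnostics pass cheap enough to repeat
    // every few seconds.
    func tailUnitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
        return string(out), err
    }
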
	I1217 02:10:10.022238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.034403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:10.034482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:10.061856 1498704 cri.go:89] found id: ""
	I1217 02:10:10.061882 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.061891 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.061897 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:10.061976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:10.089092 1498704 cri.go:89] found id: ""
	I1217 02:10:10.089118 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.089128 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.089141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:10.089217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:10.115444 1498704 cri.go:89] found id: ""
	I1217 02:10:10.115467 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.115476 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.115482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:10.115579 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:10.142860 1498704 cri.go:89] found id: ""
	I1217 02:10:10.142889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.142897 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.142904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:10.142975 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:10.171034 1498704 cri.go:89] found id: ""
	I1217 02:10:10.171061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.171070 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.171076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:10.171135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:10.201087 1498704 cri.go:89] found id: ""
	I1217 02:10:10.201121 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.201130 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.201137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:10.201206 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:10.227252 1498704 cri.go:89] found id: ""
	I1217 02:10:10.227316 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.227340 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.227353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:10.227429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:10.256814 1498704 cri.go:89] found id: ""
	I1217 02:10:10.256850 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.256859 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.256885 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.256905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.316432 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.316484 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.331782 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.331807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.418862 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.410069   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.410617   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.412164   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.413026   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.414651   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.418886 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:10.418898 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:10.447108 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.447142 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:12.978148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.988751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:12.988821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:13.014409 1498704 cri.go:89] found id: ""
	I1217 02:10:13.014435 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.014445 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.014452 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:13.014516 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:13.039697 1498704 cri.go:89] found id: ""
	I1217 02:10:13.039725 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.039734 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.039741 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:13.039830 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:13.063238 1498704 cri.go:89] found id: ""
	I1217 02:10:13.063263 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.063272 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.063279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:13.063337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:13.087932 1498704 cri.go:89] found id: ""
	I1217 02:10:13.087955 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.087964 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.087970 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:13.088029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:13.116779 1498704 cri.go:89] found id: ""
	I1217 02:10:13.116824 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.116833 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.116840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:13.116924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:13.152355 1498704 cri.go:89] found id: ""
	I1217 02:10:13.152379 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.152388 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.152395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:13.152462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:13.178465 1498704 cri.go:89] found id: ""
	I1217 02:10:13.178498 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.178507 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.178513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:13.178597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:13.204065 1498704 cri.go:89] found id: ""
	I1217 02:10:13.204090 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.204099 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.204109 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.204119 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:13.260597 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.260643 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.275806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.275834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.339094 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.330634   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.331065   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.332876   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.333564   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.335042   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:13.339116 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:13.339128 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:13.364711 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.364742 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
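
The dmesg pass narrows the kernel log to warning-and-above messages before tailing it, which is the slice where runtime crashes, cgroup trouble, or OOM kills would show up. A sketch that simply reuses the exact pipeline from the log:

    package diag

    import "os/exec"

    // kernelWarnings runs the dmesg pipeline from the log verbatim:
    // pager and color disabled, priorities warn..emerg only, last 400 lines.
    func kernelWarnings() ([]byte, error) {
        return exec.Command("/bin/bash", "-c",
            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
        ).CombinedOutput()
    }
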
	I1217 02:10:15.901294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:15.915207 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:15.915287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:15.944035 1498704 cri.go:89] found id: ""
	I1217 02:10:15.944062 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.944071 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:15.944078 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:15.944142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:15.969105 1498704 cri.go:89] found id: ""
	I1217 02:10:15.969132 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.969142 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:15.969148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:15.969213 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:15.994468 1498704 cri.go:89] found id: ""
	I1217 02:10:15.994495 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.994505 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:15.994511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:15.994576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:16.021869 1498704 cri.go:89] found id: ""
	I1217 02:10:16.021897 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.021907 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.021914 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:16.021981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:16.050208 1498704 cri.go:89] found id: ""
	I1217 02:10:16.050236 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.050245 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.050252 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:16.050319 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:16.076004 1498704 cri.go:89] found id: ""
	I1217 02:10:16.076031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.076041 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.076048 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:16.076159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:16.102446 1498704 cri.go:89] found id: ""
	I1217 02:10:16.102526 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.102550 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.102563 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:16.102643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:16.134280 1498704 cri.go:89] found id: ""
	I1217 02:10:16.134306 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.134315 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.134325 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.134362 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:16.173187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.173220 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.231927 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.231960 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.247063 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.247093 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.315647 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.307649   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.308739   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.309576   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.310605   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.311801   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.315668 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:16.315681 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
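
One detail in the failing command is worth noting: minikube shells out to the version-pinned kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0/ with an explicit --kubeconfig, rather than relying on whatever kubectl is on the PATH. A sketch of that invocation, with both paths copied from the log:

    package diag

    import "os/exec"

    // describeNodes runs the same command the log shows failing with
    // "connection refused"; it can only succeed once kube-apiserver is up.
    func describeNodes() ([]byte, error) {
        return exec.Command(
            "sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        ).CombinedOutput()
    }
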
	I1217 02:10:18.841379 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:18.852146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:18.852219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:18.877675 1498704 cri.go:89] found id: ""
	I1217 02:10:18.877750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.877765 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:18.877773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:18.877839 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:18.903447 1498704 cri.go:89] found id: ""
	I1217 02:10:18.903482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.903491 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:18.903498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:18.903576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:18.929561 1498704 cri.go:89] found id: ""
	I1217 02:10:18.929588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.929597 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:18.929604 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:18.929683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:18.955239 1498704 cri.go:89] found id: ""
	I1217 02:10:18.955333 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.955350 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:18.955358 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:18.955424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:18.979922 1498704 cri.go:89] found id: ""
	I1217 02:10:18.979953 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.979962 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:18.979968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:18.980035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:19.007041 1498704 cri.go:89] found id: ""
	I1217 02:10:19.007077 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.007087 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.007093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:19.007177 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:19.035426 1498704 cri.go:89] found id: ""
	I1217 02:10:19.035450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.035459 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.035466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:19.035542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:19.060135 1498704 cri.go:89] found id: ""
	I1217 02:10:19.060159 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.060167 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.060200 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.060217 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.116693 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.116728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.134579 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.134610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.216066 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.207558   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.208046   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.209922   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.210470   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.212114   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.216089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:19.216105 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:19.242169 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.242202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
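Note: each pass above issues one crictl query per expected component and treats empty output as "no container found". The same scan, sketched as a loop (component names and crictl flags copied from the log; the loop itself is an illustration, not harness code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"   # mirrors 'found id: ""'
    done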
	I1217 02:10:21.771406 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:21.782951 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:21.783026 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:21.809728 1498704 cri.go:89] found id: ""
	I1217 02:10:21.809750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.809758 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:21.809765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:21.809824 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:21.841207 1498704 cri.go:89] found id: ""
	I1217 02:10:21.841233 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.841242 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:21.841248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:21.841307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:21.868982 1498704 cri.go:89] found id: ""
	I1217 02:10:21.869008 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.869017 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:21.869023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:21.869102 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:21.895994 1498704 cri.go:89] found id: ""
	I1217 02:10:21.896030 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.896040 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:21.896046 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:21.896117 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:21.927675 1498704 cri.go:89] found id: ""
	I1217 02:10:21.927767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.927786 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:21.927798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:21.927886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:21.956133 1498704 cri.go:89] found id: ""
	I1217 02:10:21.956157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.956166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:21.956172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:21.956235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:21.987411 1498704 cri.go:89] found id: ""
	I1217 02:10:21.987442 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.987451 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:21.987458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:21.987528 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:22.018001 1498704 cri.go:89] found id: ""
	I1217 02:10:22.018031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:22.018041 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.018058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.018072 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.077509 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.077544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:22.094048 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.094152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.179483 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.170164   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.171129   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.172667   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.173275   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.174996   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.179527 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:22.179552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:22.208002 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.208053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
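Note: the repeated kubectl failures above all reduce to one fact: nothing is listening on localhost:8443 because no kube-apiserver container exists. Two quick checks that would confirm this from inside the node, assuming ss and curl are present in the image (neither appears in the captured log):

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -ksS https://localhost:8443/healthz || true   # connection refused while the apiserver is down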
	I1217 02:10:24.745839 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:24.756980 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:24.757073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:24.781924 1498704 cri.go:89] found id: ""
	I1217 02:10:24.781947 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.781955 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:24.781962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:24.782022 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:24.807686 1498704 cri.go:89] found id: ""
	I1217 02:10:24.807709 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.807718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:24.807725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:24.807785 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:24.833146 1498704 cri.go:89] found id: ""
	I1217 02:10:24.833177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.833197 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:24.833204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:24.833268 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:24.859474 1498704 cri.go:89] found id: ""
	I1217 02:10:24.859496 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.859505 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:24.859523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:24.859585 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:24.885498 1498704 cri.go:89] found id: ""
	I1217 02:10:24.885523 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.885532 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:24.885549 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:24.885608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:24.910357 1498704 cri.go:89] found id: ""
	I1217 02:10:24.910394 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.910403 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:24.910410 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:24.910487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:24.935548 1498704 cri.go:89] found id: ""
	I1217 02:10:24.935572 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.935581 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:24.935588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:24.935650 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:24.961748 1498704 cri.go:89] found id: ""
	I1217 02:10:24.961774 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.961813 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:24.961831 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:24.961852 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.989413 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:24.989488 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.046752 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.046797 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.074232 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.074268 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.166951 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.152840   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.157975   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.158869   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.160827   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.161145   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.166980 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:25.166994 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
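Note: the container-status command above is a two-stage fallback. The inner substitution, which crictl || echo crictl, yields the crictl path, or the bare name "crictl" when the binary is missing so the command line stays well-formed; if that invocation fails for any reason, the trailing alternative runs "sudo docker ps -a" instead. Restated for readability (same behavior, illustrative only):

    cri=$(which crictl || echo crictl)       # full path, or bare name if not installed
    sudo "$cri" ps -a || sudo docker ps -a   # any failure falls back to docker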
	I1217 02:10:27.699737 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:27.710317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:27.710401 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:27.735667 1498704 cri.go:89] found id: ""
	I1217 02:10:27.735694 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.735703 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:27.735709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:27.735770 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:27.764035 1498704 cri.go:89] found id: ""
	I1217 02:10:27.764061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.764070 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:27.764076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:27.764136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:27.788237 1498704 cri.go:89] found id: ""
	I1217 02:10:27.788265 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.788273 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:27.788280 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:27.788340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:27.815686 1498704 cri.go:89] found id: ""
	I1217 02:10:27.815714 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.815723 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:27.815730 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:27.815792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:27.846482 1498704 cri.go:89] found id: ""
	I1217 02:10:27.846510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.846518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:27.846525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:27.846584 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:27.871189 1498704 cri.go:89] found id: ""
	I1217 02:10:27.871217 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.871227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:27.871233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:27.871292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:27.899034 1498704 cri.go:89] found id: ""
	I1217 02:10:27.899056 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.899064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:27.899070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:27.899128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:27.923014 1498704 cri.go:89] found id: ""
	I1217 02:10:27.923037 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.923046 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:27.923055 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:27.923066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:27.948254 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:27.948289 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:27.978557 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:27.978582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.033709 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.033748 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:28.049287 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:28.049315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:28.120598 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:28.111016   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.111430   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113055   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113399   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.114622   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
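Note: the pgrep probe that opens every pass exits non-zero here because no apiserver process exists, which is what keeps the harness in this log-gathering branch. Its flags: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match. Stand-alone:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "kube-apiserver not running"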
	I1217 02:10:30.621228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:30.633415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:30.633544 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:30.660114 1498704 cri.go:89] found id: ""
	I1217 02:10:30.660186 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.660208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:30.660228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:30.660315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:30.687423 1498704 cri.go:89] found id: ""
	I1217 02:10:30.687450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.687459 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:30.687466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:30.687542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:30.712536 1498704 cri.go:89] found id: ""
	I1217 02:10:30.712568 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.712577 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:30.712584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:30.712658 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:30.736913 1498704 cri.go:89] found id: ""
	I1217 02:10:30.736983 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.737007 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:30.737025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:30.737115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:30.761778 1498704 cri.go:89] found id: ""
	I1217 02:10:30.761852 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.761875 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:30.761889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:30.761963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:30.789829 1498704 cri.go:89] found id: ""
	I1217 02:10:30.789854 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.789863 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:30.789869 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:30.789930 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:30.815268 1498704 cri.go:89] found id: ""
	I1217 02:10:30.815296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.815304 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:30.815311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:30.815373 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:30.839769 1498704 cri.go:89] found id: ""
	I1217 02:10:30.839793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.839802 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:30.839811 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:30.839823 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:30.854187 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:30.854216 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:30.917680 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:30.908973   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.909688   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911279   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911863   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.913482   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:30.917706 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:30.917718 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:30.943267 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:30.943300 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:30.970294 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:30.970374 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
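Note: the dmesg invocation above, spelled out with the util-linux long options for readability (-P is --nopager, -H is --human, -L is --color), keeps only warn-and-above kernel messages and trims to the last 400 lines:

    sudo dmesg --human --nopager --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400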
	I1217 02:10:33.525981 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:33.536356 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:33.536427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:33.561187 1498704 cri.go:89] found id: ""
	I1217 02:10:33.561210 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.561219 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:33.561225 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:33.561287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:33.589979 1498704 cri.go:89] found id: ""
	I1217 02:10:33.590002 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.590012 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:33.590023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:33.590082 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:33.615543 1498704 cri.go:89] found id: ""
	I1217 02:10:33.615567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.615576 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:33.615583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:33.615644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:33.648052 1498704 cri.go:89] found id: ""
	I1217 02:10:33.648080 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.648089 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:33.648095 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:33.648162 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:33.676343 1498704 cri.go:89] found id: ""
	I1217 02:10:33.676376 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.676386 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:33.676392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:33.676459 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:33.707262 1498704 cri.go:89] found id: ""
	I1217 02:10:33.707338 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.707353 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:33.707359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:33.707419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:33.732853 1498704 cri.go:89] found id: ""
	I1217 02:10:33.732920 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.732945 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:33.732963 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:33.733053 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:33.757542 1498704 cri.go:89] found id: ""
	I1217 02:10:33.757567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.757576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:33.757585 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:33.757596 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:33.821758 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:33.813865   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.814366   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.815953   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.816345   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.817904   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:33.821777 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:33.821791 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:33.846519 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:33.846555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:33.873755 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:33.873782 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.930246 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:33.930282 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
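Note: the describe-nodes step runs the kubectl binary that minikube ships inside the node under /var/lib/minikube/binaries/v1.35.0-beta.0/ against the node-local kubeconfig, with sudo mirroring the harness invocation. A manual repro from the host would look like the sketch below; the profile name is a placeholder, not taken from this log:

    minikube -p <profile> ssh \
      "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig"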
	I1217 02:10:36.445766 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:36.456503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:36.456576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:36.483872 1498704 cri.go:89] found id: ""
	I1217 02:10:36.483894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.483903 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:36.483909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:36.483970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:36.508742 1498704 cri.go:89] found id: ""
	I1217 02:10:36.508765 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.508774 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:36.508780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:36.508838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:36.535472 1498704 cri.go:89] found id: ""
	I1217 02:10:36.535511 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.535520 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:36.535527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:36.535591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:36.566274 1498704 cri.go:89] found id: ""
	I1217 02:10:36.566296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.566305 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:36.566311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:36.566372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:36.590882 1498704 cri.go:89] found id: ""
	I1217 02:10:36.590904 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.590912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:36.590918 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:36.590977 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:36.614768 1498704 cri.go:89] found id: ""
	I1217 02:10:36.614793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.614802 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:36.614808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:36.614889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:36.643752 1498704 cri.go:89] found id: ""
	I1217 02:10:36.643778 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.643787 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:36.643794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:36.643857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:36.672151 1498704 cri.go:89] found id: ""
	I1217 02:10:36.672177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.672186 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:36.672194 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:36.672208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:36.733511 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:36.733544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:36.752180 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:36.752255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:36.815443 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:36.807321   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.807927   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.809664   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.810137   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.811712   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:36.815465 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:36.815478 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:36.840305 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:36.840349 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:39.373770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:39.386294 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:39.386380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:39.420073 1498704 cri.go:89] found id: ""
	I1217 02:10:39.420117 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.420126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:39.420132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:39.420210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:39.454303 1498704 cri.go:89] found id: ""
	I1217 02:10:39.454327 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.454338 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:39.454344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:39.454402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:39.483117 1498704 cri.go:89] found id: ""
	I1217 02:10:39.483143 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.483152 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:39.483159 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:39.483236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:39.507851 1498704 cri.go:89] found id: ""
	I1217 02:10:39.507927 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.507942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:39.507949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:39.508011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:39.535318 1498704 cri.go:89] found id: ""
	I1217 02:10:39.535344 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.535353 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:39.535359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:39.535460 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:39.559510 1498704 cri.go:89] found id: ""
	I1217 02:10:39.559587 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.559602 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:39.559610 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:39.559670 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:39.588446 1498704 cri.go:89] found id: ""
	I1217 02:10:39.588477 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.588487 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:39.588493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:39.588597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:39.616016 1498704 cri.go:89] found id: ""
	I1217 02:10:39.616041 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.616049 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:39.616058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:39.616069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:39.678516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:39.678553 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:39.698413 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:39.698440 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:39.766310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:39.757858   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.758625   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760117   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760571   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.762054   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:39.766333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:39.766347 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:39.791602 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:39.791641 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
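Note on the "container status" step above: the shell line prefers crictl and only falls back to `docker ps -a` when crictl is missing or fails (the backtick substitution `which crictl || echo crictl` guarantees the first command always has something to run). Below is a minimal Go sketch of that fallback, not minikube's actual implementation; it assumes passwordless sudo and that at least one of the two CLIs is installed.

    // containerstatus_sketch.go — hedged sketch of the crictl-then-docker fallback.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	// Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err == nil {
    		return string(out), nil
    	}
    	// crictl absent or erroring: try the Docker CLI instead.
    	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("both crictl and docker failed: %w", err)
    	}
    	return string(out), nil
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Print(out)
    }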
	I1217 02:10:42.319919 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:42.330880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:42.330962 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:42.355776 1498704 cri.go:89] found id: ""
	I1217 02:10:42.355798 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.355807 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:42.355813 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:42.355872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:42.393050 1498704 cri.go:89] found id: ""
	I1217 02:10:42.393084 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.393093 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:42.393100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:42.393159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:42.426120 1498704 cri.go:89] found id: ""
	I1217 02:10:42.426157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.426166 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:42.426174 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:42.426245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:42.456881 1498704 cri.go:89] found id: ""
	I1217 02:10:42.456917 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.456926 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:42.456932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:42.456999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:42.481272 1498704 cri.go:89] found id: ""
	I1217 02:10:42.481298 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.481307 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:42.481312 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:42.481372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:42.506468 1498704 cri.go:89] found id: ""
	I1217 02:10:42.506497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.506506 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:42.506512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:42.506572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:42.531395 1498704 cri.go:89] found id: ""
	I1217 02:10:42.531460 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.531476 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:42.531484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:42.531552 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:42.555791 1498704 cri.go:89] found id: ""
	I1217 02:10:42.555814 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.555822 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:42.555831 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:42.555843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:42.611764 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:42.611800 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:42.627436 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:42.627463 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:42.717562 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:42.717584 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:42.717597 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:42.742727 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:42.742763 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
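Each cycle above runs the same per-component check (cri.go / logs.go): `crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result produces the `found id: ""` / `0 containers` / `No container was found matching ...` trio. A minimal sketch of that check, under the same sudo/crictl assumptions as before:

    // cri_list_sketch.go — hedged sketch of the per-component container lookup.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line) // one container ID per non-empty line
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listCRIContainers(name)
    		if err != nil {
    			fmt.Println(name, "error:", err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		}
    	}
    }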
	I1217 02:10:45.269723 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:45.281660 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:45.281736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:45.307916 1498704 cri.go:89] found id: ""
	I1217 02:10:45.307941 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.307950 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:45.307956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:45.308021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:45.337837 1498704 cri.go:89] found id: ""
	I1217 02:10:45.337862 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.337871 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:45.337878 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:45.337943 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:45.382867 1498704 cri.go:89] found id: ""
	I1217 02:10:45.382894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.382903 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:45.382909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:45.382970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:45.424600 1498704 cri.go:89] found id: ""
	I1217 02:10:45.424629 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.424637 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:45.424644 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:45.424707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:45.456469 1498704 cri.go:89] found id: ""
	I1217 02:10:45.456497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.456505 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:45.456511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:45.456574 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:45.482345 1498704 cri.go:89] found id: ""
	I1217 02:10:45.482370 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.482378 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:45.482385 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:45.482450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:45.507901 1498704 cri.go:89] found id: ""
	I1217 02:10:45.507930 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.507948 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:45.507955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:45.508065 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:45.532875 1498704 cri.go:89] found id: ""
	I1217 02:10:45.532896 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.532904 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:45.532913 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:45.532924 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:45.589239 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:45.589273 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:45.604011 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:45.604045 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:45.695710 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:45.686715   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.687431   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689161   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689946   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.691789   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:45.686715   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.687431   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689161   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689946   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.691789   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:45.695788 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:45.695808 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:45.721274 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:45.721310 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
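Every "describe nodes" attempt in this log dies the same way: since no kube-apiserver container exists, nothing listens on localhost:8443 and kubectl's API discovery is refused immediately (`dial tcp [::1]:8443: connect: connection refused`). A quick probe, assuming it runs on the minikube node itself, reproduces that failure mode directly:

    // probe_sketch.go — minimal reachability check for the apiserver port.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the kubectl stderr above, e.g. "connect: connection refused".
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }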
	I1217 02:10:48.251294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:48.261750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.261825 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.286414 1498704 cri.go:89] found id: ""
	I1217 02:10:48.286441 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.286450 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.286457 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.286515 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.315314 1498704 cri.go:89] found id: ""
	I1217 02:10:48.315336 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.315344 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.315351 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.315411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.339435 1498704 cri.go:89] found id: ""
	I1217 02:10:48.339461 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.339469 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.339476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.339543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.363969 1498704 cri.go:89] found id: ""
	I1217 02:10:48.364045 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.364061 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.364069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.364134 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.391387 1498704 cri.go:89] found id: ""
	I1217 02:10:48.391409 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.391418 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.391425 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.391489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.422985 1498704 cri.go:89] found id: ""
	I1217 02:10:48.423006 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.423014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.423021 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.423081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.451561 1498704 cri.go:89] found id: ""
	I1217 02:10:48.451588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.451598 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.451605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:48.451667 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:48.477573 1498704 cri.go:89] found id: ""
	I1217 02:10:48.477597 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.477607 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:48.477616 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:48.477627 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:48.503190 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:48.503227 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.531901 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:48.531927 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:48.590637 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:48.590670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:48.606410 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:48.606441 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:48.698001 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:48.689453   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.690595   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692088   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692610   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.694141   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:48.689453   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.690595   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692088   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692610   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.694141   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
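The timestamps (02:10:39, :42, :45, :48, :51, ...) show the driver re-probing for a kube-apiserver process roughly every three seconds and re-gathering logs after each miss. A hedged reconstruction of that wait loop, using the exact pgrep pattern from the log (pgrep exits non-zero when nothing matches):

    // wait_sketch.go — sketch of the ~3s retry cadence visible above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // a matching process exists
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(30 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }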
	I1217 02:10:51.198775 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:51.210128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:51.210207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:51.239455 1498704 cri.go:89] found id: ""
	I1217 02:10:51.239482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.239491 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:51.239504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:51.239587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:51.265468 1498704 cri.go:89] found id: ""
	I1217 02:10:51.265541 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.265565 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:51.265583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:51.265684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:51.290269 1498704 cri.go:89] found id: ""
	I1217 02:10:51.290294 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.290303 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:51.290310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:51.290403 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:51.315672 1498704 cri.go:89] found id: ""
	I1217 02:10:51.315697 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.315706 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:51.315712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:51.315775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:51.345852 1498704 cri.go:89] found id: ""
	I1217 02:10:51.345922 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.345938 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:51.345945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:51.346021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:51.374855 1498704 cri.go:89] found id: ""
	I1217 02:10:51.374884 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.374892 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:51.374899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:51.374967 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:51.408516 1498704 cri.go:89] found id: ""
	I1217 02:10:51.408553 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.408563 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:51.408569 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:51.408636 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:51.443401 1498704 cri.go:89] found id: ""
	I1217 02:10:51.443428 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.443436 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:51.443445 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:51.443474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:51.499872 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:51.499907 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:51.514690 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:51.514759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:51.581421 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:51.573065   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.573700   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.575403   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.576080   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.577582   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:51.573065   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.573700   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.575403   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.576080   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.577582   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:51.581455 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:51.581470 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:51.606921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:51.606964 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.151396 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:54.162403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:54.162479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:54.188307 1498704 cri.go:89] found id: ""
	I1217 02:10:54.188331 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.188340 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:54.188347 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:54.188411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:54.222781 1498704 cri.go:89] found id: ""
	I1217 02:10:54.222803 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.222818 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:54.222824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:54.222886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:54.251344 1498704 cri.go:89] found id: ""
	I1217 02:10:54.251415 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.251439 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:54.251451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:54.251535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:54.280867 1498704 cri.go:89] found id: ""
	I1217 02:10:54.280889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.280898 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:54.280904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:54.280966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:54.306150 1498704 cri.go:89] found id: ""
	I1217 02:10:54.306177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.306185 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:54.306192 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:54.306250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:54.330272 1498704 cri.go:89] found id: ""
	I1217 02:10:54.330296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.330310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:54.330317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:54.330375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:54.359393 1498704 cri.go:89] found id: ""
	I1217 02:10:54.359423 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.359431 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:54.359438 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:54.359525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:54.392745 1498704 cri.go:89] found id: ""
	I1217 02:10:54.392780 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.392804 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:54.392822 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:54.392835 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:54.469149 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:54.460070   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.460755   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462299   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462877   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.464624   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:54.460070   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.460755   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462299   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462877   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.464624   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:54.469171 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:54.469185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:54.495699 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:54.495738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.524004 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:54.524031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:54.579558 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:54.579592 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
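The non-kubectl gather steps (kubelet, containerd, dmesg) all succeed in each cycle; only the apiserver-dependent "describe nodes" fails. A small sketch that replays those collection commands with the exact flags from the log; sudo access and the systemd units are assumptions about the node:

    // gather_sketch.go — sketch of the journal/dmesg collection steps above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name, script string) {
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s failed: %v\n", name, err)
    	}
    	fmt.Printf("== %s ==\n%s\n", name, out)
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("containerd", `sudo journalctl -u containerd -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    }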
	I1217 02:10:57.095655 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:57.106067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:57.106145 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:57.130932 1498704 cri.go:89] found id: ""
	I1217 02:10:57.130961 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.130970 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:57.130976 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:57.131046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:57.160073 1498704 cri.go:89] found id: ""
	I1217 02:10:57.160098 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.160107 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:57.160113 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:57.160173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:57.184768 1498704 cri.go:89] found id: ""
	I1217 02:10:57.184793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.184802 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:57.184808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:57.184867 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:57.210332 1498704 cri.go:89] found id: ""
	I1217 02:10:57.210358 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.210367 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:57.210374 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:57.210457 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:57.234920 1498704 cri.go:89] found id: ""
	I1217 02:10:57.234984 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.234999 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:57.235007 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:57.235072 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:57.260151 1498704 cri.go:89] found id: ""
	I1217 02:10:57.260183 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.260193 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:57.260201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:57.260310 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:57.287966 1498704 cri.go:89] found id: ""
	I1217 02:10:57.288000 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.288009 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:57.288032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:57.288115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:57.312191 1498704 cri.go:89] found id: ""
	I1217 02:10:57.312252 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.312284 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:57.312306 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:57.312330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:57.344168 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:57.344196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:57.400635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:57.400672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:57.416567 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:57.416594 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:57.485990 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:57.478006   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.478609   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480125   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480618   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.482100   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:57.478006   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.478609   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480125   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480618   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.482100   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:57.486013 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:57.486028 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.011650 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:00.083065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:00.083205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:00.177092 1498704 cri.go:89] found id: ""
	I1217 02:11:00.177120 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.177129 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:00.177137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:00.177210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:00.240557 1498704 cri.go:89] found id: ""
	I1217 02:11:00.240645 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.240670 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:00.240689 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:00.240818 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:00.290983 1498704 cri.go:89] found id: ""
	I1217 02:11:00.291075 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.291101 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:00.291120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:00.291245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:00.339816 1498704 cri.go:89] found id: ""
	I1217 02:11:00.339906 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.339935 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:00.339955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:00.340060 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:00.400482 1498704 cri.go:89] found id: ""
	I1217 02:11:00.400508 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.400516 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:00.400525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:00.400594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:00.437316 1498704 cri.go:89] found id: ""
	I1217 02:11:00.437386 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.437413 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:00.437432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:00.437531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:00.464791 1498704 cri.go:89] found id: ""
	I1217 02:11:00.464859 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.464881 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:00.464899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:00.464986 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:00.492400 1498704 cri.go:89] found id: ""
	I1217 02:11:00.492468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.492492 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:00.492514 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:00.492551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:00.549202 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:00.549237 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:00.564046 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:00.564073 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:00.636379 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:00.636409 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:00.636423 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.666039 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:00.666076 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.197992 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:03.209540 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:03.209610 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:03.237337 1498704 cri.go:89] found id: ""
	I1217 02:11:03.237411 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.237436 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:03.237458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:03.237545 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:03.262191 1498704 cri.go:89] found id: ""
	I1217 02:11:03.262213 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.262221 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:03.262228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:03.262286 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:03.286816 1498704 cri.go:89] found id: ""
	I1217 02:11:03.286840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.286850 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:03.286856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:03.286915 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:03.310933 1498704 cri.go:89] found id: ""
	I1217 02:11:03.311007 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.311023 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:03.311031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:03.311089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:03.334605 1498704 cri.go:89] found id: ""
	I1217 02:11:03.334628 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.334637 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:03.334643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:03.334701 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:03.359646 1498704 cri.go:89] found id: ""
	I1217 02:11:03.359681 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.359690 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:03.359697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:03.359789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:03.391919 1498704 cri.go:89] found id: ""
	I1217 02:11:03.391946 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.391955 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:03.391962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:03.392025 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:03.419543 1498704 cri.go:89] found id: ""
	I1217 02:11:03.419567 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.419576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:03.419586 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:03.419600 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.455897 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:03.455925 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:03.512216 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:03.512255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:03.527344 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:03.527372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:03.591374 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:03.591396 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:03.591408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.117735 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:06.128394 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:06.128466 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:06.155397 1498704 cri.go:89] found id: ""
	I1217 02:11:06.155420 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.155430 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:06.155436 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:06.155669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:06.185554 1498704 cri.go:89] found id: ""
	I1217 02:11:06.185631 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.185682 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:06.185697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:06.185769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:06.214540 1498704 cri.go:89] found id: ""
	I1217 02:11:06.214564 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.214573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:06.214579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:06.214637 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:06.240468 1498704 cri.go:89] found id: ""
	I1217 02:11:06.240492 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.240501 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:06.240507 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:06.240570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:06.266674 1498704 cri.go:89] found id: ""
	I1217 02:11:06.266697 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.266706 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:06.266712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:06.266781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:06.292194 1498704 cri.go:89] found id: ""
	I1217 02:11:06.292218 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.292227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:06.292233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:06.292295 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:06.320979 1498704 cri.go:89] found id: ""
	I1217 02:11:06.321002 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.321011 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:06.321017 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:06.321074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:06.347269 1498704 cri.go:89] found id: ""
	I1217 02:11:06.347294 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.347303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:06.347315 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:06.347326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:06.409046 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:06.409101 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:06.425379 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:06.425406 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.490322 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.490345 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:06.490357 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.515786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:06.515825 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.043785 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:09.054506 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:09.054580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:09.079819 1498704 cri.go:89] found id: ""
	I1217 02:11:09.079848 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.079856 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:09.079862 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:09.079921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:09.104928 1498704 cri.go:89] found id: ""
	I1217 02:11:09.104953 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.104963 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:09.104969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:09.105031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:09.130212 1498704 cri.go:89] found id: ""
	I1217 02:11:09.130238 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.130246 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:09.130255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:09.130358 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:09.159130 1498704 cri.go:89] found id: ""
	I1217 02:11:09.159153 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.159162 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:09.159169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:09.159245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:09.184267 1498704 cri.go:89] found id: ""
	I1217 02:11:09.184292 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.184301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:09.184307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:09.184371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:09.209170 1498704 cri.go:89] found id: ""
	I1217 02:11:09.209195 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.209204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:09.209210 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:09.209271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:09.235842 1498704 cri.go:89] found id: ""
	I1217 02:11:09.235869 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.235878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:09.235884 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:09.235946 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:09.265413 1498704 cri.go:89] found id: ""
	I1217 02:11:09.265445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.265454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:09.265463 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.265475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.302759 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:09.302784 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:09.358361 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:09.358394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:09.378248 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:09.378278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.451227 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.451247 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:09.451260 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:11.977784 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.988725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:11.988798 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:12.015755 1498704 cri.go:89] found id: ""
	I1217 02:11:12.015778 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.015788 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:12.015795 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:12.015866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:12.042225 1498704 cri.go:89] found id: ""
	I1217 02:11:12.042250 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.042259 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:12.042269 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:12.042328 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:12.067951 1498704 cri.go:89] found id: ""
	I1217 02:11:12.067977 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.067987 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:12.067993 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:12.068054 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:12.094539 1498704 cri.go:89] found id: ""
	I1217 02:11:12.094565 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.094574 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:12.094580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:12.094641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:12.120422 1498704 cri.go:89] found id: ""
	I1217 02:11:12.120445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.120454 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:12.120461 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:12.120521 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:12.146437 1498704 cri.go:89] found id: ""
	I1217 02:11:12.146465 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.146491 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:12.146498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:12.146560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:12.171817 1498704 cri.go:89] found id: ""
	I1217 02:11:12.171840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.171849 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:12.171855 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:12.171914 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:12.200987 1498704 cri.go:89] found id: ""
	I1217 02:11:12.201013 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.201022 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:12.201031 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.201043 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:12.232701 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:12.232731 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.288687 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.288722 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.303401 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.303479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.371087 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.371112 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:12.371125 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:14.899732 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.913037 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:14.913112 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:14.939368 1498704 cri.go:89] found id: ""
	I1217 02:11:14.939399 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.939408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.939415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:14.939476 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:14.964809 1498704 cri.go:89] found id: ""
	I1217 02:11:14.964835 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.964844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.964849 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:14.964911 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:14.992442 1498704 cri.go:89] found id: ""
	I1217 02:11:14.992468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.992477 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.992483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:14.992542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:15.029492 1498704 cri.go:89] found id: ""
	I1217 02:11:15.029518 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.029527 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:15.029534 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:15.029604 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:15.059736 1498704 cri.go:89] found id: ""
	I1217 02:11:15.059760 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.059770 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:15.059776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:15.059841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:15.086908 1498704 cri.go:89] found id: ""
	I1217 02:11:15.086991 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.087014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:15.087029 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:15.087104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:15.113800 1498704 cri.go:89] found id: ""
	I1217 02:11:15.113829 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.113838 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.113844 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:15.113903 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:15.140421 1498704 cri.go:89] found id: ""
	I1217 02:11:15.140445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.140454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.140463 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.140475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.197971 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.198003 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.213157 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.213186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.278282 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.278303 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:15.278316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:15.303867 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.303900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.833800 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.844470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:17.844546 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:17.871228 1498704 cri.go:89] found id: ""
	I1217 02:11:17.871254 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.871262 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.871270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:17.871345 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:17.909403 1498704 cri.go:89] found id: ""
	I1217 02:11:17.909430 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.909438 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.909444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:17.909505 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:17.942319 1498704 cri.go:89] found id: ""
	I1217 02:11:17.942341 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.942348 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.942355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:17.942416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:17.967521 1498704 cri.go:89] found id: ""
	I1217 02:11:17.967546 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.967554 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.967561 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:17.967619 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:17.995465 1498704 cri.go:89] found id: ""
	I1217 02:11:17.995488 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.995518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:17.995526 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:17.995587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:18.023559 1498704 cri.go:89] found id: ""
	I1217 02:11:18.023587 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.023596 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.023603 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:18.023664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:18.049983 1498704 cri.go:89] found id: ""
	I1217 02:11:18.050011 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.050027 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.050033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:18.050096 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:18.081999 1498704 cri.go:89] found id: ""
	I1217 02:11:18.082023 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.082033 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.082042 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.082054 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.096662 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.096692 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.160156 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.160179 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:18.160192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:18.185291 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.185325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.216271 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.216298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.775311 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:20.789631 1498704 out.go:203] 
	W1217 02:11:20.792902 1498704 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:11:20.792939 1498704 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:11:20.792950 1498704 out.go:285] * Related issues:
	W1217 02:11:20.792967 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:11:20.792986 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:11:20.795906 1498704 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212356563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212424346Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212528511Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212600537Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212667581Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212731344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212789486Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212848654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212916946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213001919Z" level=info msg="Connect containerd service"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213359100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.214132836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224058338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224260137Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224195259Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.233004319Z" level=info msg="Start recovering state"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.265931194Z" level=info msg="Start event monitor"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266119036Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266183250Z" level=info msg="Start streaming server"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266253167Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266318809Z" level=info msg="runtime interface starting up..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266375187Z" level=info msg="starting plugins..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266454539Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:05:19 newest-cni-456492 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.268086737Z" level=info msg="containerd successfully booted in 0.090817s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.201306   13730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.202054   13730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.203958   13730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.204477   13730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.206000   13730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:11:30 up  7:54,  0 user,  load average: 0.37, 0.69, 1.20
	Linux newest-cni-456492 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:11:26 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:26 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:26 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:27 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:27 newest-cni-456492 kubelet[13576]: E1217 02:11:27.719692   13576 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:27 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:27 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:28 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 17 02:11:28 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:28 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:28 newest-cni-456492 kubelet[13613]: E1217 02:11:28.442480   13613 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:28 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:28 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:29 newest-cni-456492 kubelet[13633]: E1217 02:11:29.181363   13633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:29 newest-cni-456492 kubelet[13669]: E1217 02:11:29.942439   13669 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:29 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
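The kubelet section above shows the actual failure: on this host (Linux 5.15.0-1084-aws, Ubuntu 20.04, still on cgroup v1 per the kubelet's own error), the v1.35.0-beta.0 kubelet exits during configuration validation with "kubelet is configured to not run on a host using cgroup v1", so no control-plane containers are ever created and minikube times out with K8S_APISERVER_MISSING. A minimal sketch for confirming the cgroup mode by hand (the stat and minikube invocations are standard; the profile name is taken from this run, and the kernel parameter is a general systemd option, not something this report prescribes):

    # "cgroup2fs" => unified cgroup v2; "tmpfs" => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/

    # Same check inside the minikube node container for this profile:
    minikube ssh -p newest-cni-456492 "stat -fc %T /sys/fs/cgroup/"

    # On systemd hosts, cgroup v2 can be enabled via the kernel command line
    # (standard systemd boot option, shown here only as a possible remediation):
    #   systemd.unified_cgroup_hierarchy=1
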
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (350.787108ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-456492" apiserver is not running, skipping kubectl commands (state="Stopped")
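For reference, the repeated `found id: ""` cycles earlier in the log come from minikube probing each control-plane component roughly every three seconds until the 6m0s wait expires. The same probes can be reproduced by hand inside the node; this is a sketch built from the commands visible in the log (the loop wrapper is illustrative, not minikube code):

    # inside the node, e.g. via: minikube ssh -p newest-cni-456492
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # any apiserver process at all?

    # has the CRI ever created a container for each component?
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        printf '%-26s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c")"
    done

    # why kubelet (which would create those containers) keeps restarting:
    sudo journalctl -u kubelet -n 50 --no-pager
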
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-456492
helpers_test.go:244: (dbg) docker inspect newest-cni-456492:

-- stdout --
	[
	    {
	        "Id": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	        "Created": "2025-12-17T01:55:16.478266179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1498839,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:13.106483917Z",
	            "FinishedAt": "2025-12-17T02:05:11.800057613Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/hosts",
	        "LogPath": "/var/lib/docker/containers/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2/72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2-json.log",
	        "Name": "/newest-cni-456492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-456492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-456492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c4fe7eb78490c86ef7f733334d1b02f6f4cb98414c805d9159a51b973cc7d2",
	                "LowerDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8b7b9388b01c546c016e7eea89b431774a39376ecd64a6dde1e693dd84d300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456492",
	                "Source": "/var/lib/docker/volumes/newest-cni-456492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456492",
	                "name.minikube.sigs.k8s.io": "newest-cni-456492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab62f167f6067cd4de4467e8c5dccfa413a051915ec69dabeccc65bc59cf0aee",
	            "SandboxKey": "/var/run/docker/netns/ab62f167f606",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-456492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:ab:b6:47:86:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78c732410c8ee8b3c147900aac111eb07f35c057f64efcecb5d20570fed785bc",
	                    "EndpointID": "c3b1f12eab3f1b8581f7a3375c215b8790019ebdc7d258d9fd03a25fc5d36dd1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456492",
	                        "72c4fe7eb784"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
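For triage, the full docker inspect dump above can be narrowed with Go templates rather than scanning the whole JSON. A minimal sketch (the profile name newest-cni-456492 is taken from this run; the last template is the same one minikube itself runs later in these logs to resolve the SSH port):

	# state of the kic container: is it running, is it paused?
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-456492
	# all published host-port mappings as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-456492
	# just the host port mapped to 22/tcp (here: 34259)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-456492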
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (332.410052ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
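Note: minikube status encodes component states as bits in the exit code (see minikube status --help for the exact encoding), so a non-zero exit with a "Running" host is consistent with the stopped apiserver reported at the top of this post-mortem. A sketch for pulling the remaining components in one call, assuming the field names of minikube's Status struct:

	out/minikube-linux-arm64 status -p newest-cni-456492 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'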
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-456492 logs -n 25: (1.593163133s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p default-k8s-diff-port-069646                                                                                                                                                                                                                            │ default-k8s-diff-port-069646 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p disable-driver-mounts-743315                                                                                                                                                                                                                            │ disable-driver-mounts-743315 │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ stop    │ -p embed-certs-608379 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:54 UTC │
	│ image   │ embed-certs-608379 image list --format=json                                                                                                                                                                                                                │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ pause   │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ unpause │ -p embed-certs-608379 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ delete  │ -p embed-certs-608379                                                                                                                                                                                                                                      │ embed-certs-608379           │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:01 UTC │                     │
	│ stop    │ -p no-preload-178365 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ addons  │ enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p no-preload-178365 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-178365            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p newest-cni-456492 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-456492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p newest-cni-456492 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ image   │ newest-cni-456492 image list --format=json                                                                                                                                                                                                                 │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	│ pause   │ -p newest-cni-456492 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	│ unpause │ -p newest-cni-456492 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-456492            │ jenkins │ v1.37.0 │ 17 Dec 25 02:11 UTC │ 17 Dec 25 02:11 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:05:12.850501 1498704 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:05:12.850637 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.850649 1498704 out.go:374] Setting ErrFile to fd 2...
	I1217 02:05:12.850655 1498704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:12.851041 1498704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:05:12.851511 1498704 out.go:368] Setting JSON to false
	I1217 02:05:12.852479 1498704 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28063,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:05:12.852572 1498704 start.go:143] virtualization:  
	I1217 02:05:12.855474 1498704 out.go:179] * [newest-cni-456492] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:05:12.857672 1498704 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:12.857773 1498704 notify.go:221] Checking for updates...
	I1217 02:05:12.863254 1498704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:12.866037 1498704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:12.868948 1498704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:05:12.871863 1498704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:05:12.874787 1498704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:12.878103 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:12.878662 1498704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:12.900447 1498704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:05:12.900598 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:12.960234 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:12.950894493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:12.960347 1498704 docker.go:319] overlay module found
	I1217 02:05:12.963370 1498704 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:12.966210 1498704 start.go:309] selected driver: docker
	I1217 02:05:12.966233 1498704 start.go:927] validating driver "docker" against &{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:12.966382 1498704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:12.967091 1498704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:13.019814 1498704 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:05:13.010546439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:05:13.020178 1498704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:05:13.020210 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:13.020262 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:13.020307 1498704 start.go:353] cluster config:
	{Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:13.023434 1498704 out.go:179] * Starting "newest-cni-456492" primary control-plane node in "newest-cni-456492" cluster
	I1217 02:05:13.026234 1498704 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:05:13.029131 1498704 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:13.031994 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:13.032048 1498704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1217 02:05:13.032060 1498704 cache.go:65] Caching tarball of preloaded images
	I1217 02:05:13.032113 1498704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:13.032150 1498704 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:05:13.032162 1498704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1217 02:05:13.032281 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.052501 1498704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:13.052525 1498704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:13.052542 1498704 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:13.052572 1498704 start.go:360] acquireMachinesLock for newest-cni-456492: {Name:mka8782258556ee88dcf89b45436bfbb3b48383d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:13.052633 1498704 start.go:364] duration metric: took 38.597µs to acquireMachinesLock for "newest-cni-456492"
	I1217 02:05:13.052657 1498704 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:13.052663 1498704 fix.go:54] fixHost starting: 
	I1217 02:05:13.052926 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.069585 1498704 fix.go:112] recreateIfNeeded on newest-cni-456492: state=Stopped err=<nil>
	W1217 02:05:13.069617 1498704 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 02:05:11.635157 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:14.135122 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:16.135221 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:13.072747 1498704 out.go:252] * Restarting existing docker container for "newest-cni-456492" ...
	I1217 02:05:13.072837 1498704 cli_runner.go:164] Run: docker start newest-cni-456492
	I1217 02:05:13.388698 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:13.414091 1498704 kic.go:430] container "newest-cni-456492" state is running.
	I1217 02:05:13.414525 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:13.433261 1498704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/config.json ...
	I1217 02:05:13.433961 1498704 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:13.434162 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:13.455043 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:13.455367 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:13.455376 1498704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:13.456190 1498704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:16.589394 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.589424 1498704 ubuntu.go:182] provisioning hostname "newest-cni-456492"
	I1217 02:05:16.589509 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.608291 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.608611 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.608628 1498704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456492 && echo "newest-cni-456492" | sudo tee /etc/hostname
	I1217 02:05:16.748318 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456492
	
	I1217 02:05:16.748417 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:16.766749 1498704 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:16.767082 1498704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34259 <nil> <nil>}
	I1217 02:05:16.767106 1498704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456492/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:16.899757 1498704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:05:16.899788 1498704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:05:16.899820 1498704 ubuntu.go:190] setting up certificates
	I1217 02:05:16.899839 1498704 provision.go:84] configureAuth start
	I1217 02:05:16.899906 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:16.924665 1498704 provision.go:143] copyHostCerts
	I1217 02:05:16.924743 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:05:16.924752 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:05:16.924828 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:05:16.924938 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:05:16.924943 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:05:16.924976 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:05:16.925038 1498704 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:05:16.925047 1498704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:05:16.925072 1498704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:05:16.925127 1498704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456492 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-456492]
	I1217 02:05:17.601803 1498704 provision.go:177] copyRemoteCerts
	I1217 02:05:17.601873 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:17.601926 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.636357 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.741722 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:05:17.761034 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:17.779707 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:17.797837 1498704 provision.go:87] duration metric: took 897.968313ms to configureAuth
	I1217 02:05:17.797870 1498704 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:17.798087 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:17.798100 1498704 machine.go:97] duration metric: took 4.364124237s to provisionDockerMachine
	I1217 02:05:17.798118 1498704 start.go:293] postStartSetup for "newest-cni-456492" (driver="docker")
	I1217 02:05:17.798134 1498704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:17.798198 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:17.798254 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.815970 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:17.909838 1498704 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:17.913351 1498704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:17.913383 1498704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:17.913395 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:05:17.913453 1498704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:05:17.913544 1498704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:05:17.913681 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:17.921360 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:17.939679 1498704 start.go:296] duration metric: took 141.5414ms for postStartSetup
	I1217 02:05:17.939826 1498704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:17.939877 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:17.957594 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.059706 1498704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:18.065122 1498704 fix.go:56] duration metric: took 5.012436797s for fixHost
	I1217 02:05:18.065156 1498704 start.go:83] releasing machines lock for "newest-cni-456492", held for 5.012509749s
	I1217 02:05:18.065242 1498704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456492
	I1217 02:05:18.082756 1498704 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:18.082825 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.083064 1498704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:05:18.083126 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:18.102210 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.102306 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:18.193581 1498704 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:18.286865 1498704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:18.291506 1498704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:18.291604 1498704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:18.301001 1498704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:18.301023 1498704 start.go:496] detecting cgroup driver to use...
	I1217 02:05:18.301056 1498704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:18.301104 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:05:18.318916 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:18.332388 1498704 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:05:18.332450 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:05:18.348560 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:05:18.361841 1498704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:05:18.501489 1498704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:05:18.625467 1498704 docker.go:234] disabling docker service ...
	I1217 02:05:18.625544 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:05:18.642408 1498704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:05:18.656014 1498704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:05:18.765362 1498704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:05:18.886790 1498704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:18.900617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:18.915221 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:18.924900 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:18.934313 1498704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:18.934389 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:18.943795 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.953183 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:18.962127 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:18.971122 1498704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:18.979419 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:18.988380 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:18.999817 1498704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:19.010244 1498704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:19.018996 1498704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:19.026929 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.133908 1498704 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:05:19.268405 1498704 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:05:19.268490 1498704 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:05:19.272284 1498704 start.go:564] Will wait 60s for crictl version
	I1217 02:05:19.272347 1498704 ssh_runner.go:195] Run: which crictl
	I1217 02:05:19.275756 1498704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:19.301130 1498704 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:05:19.301201 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.322372 1498704 ssh_runner.go:195] Run: containerd --version
	I1217 02:05:19.348617 1498704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1217 02:05:19.351633 1498704 cli_runner.go:164] Run: docker network inspect newest-cni-456492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:05:19.367774 1498704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:05:19.371830 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.384786 1498704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:05:19.387816 1498704 kubeadm.go:884] updating cluster {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:19.387972 1498704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1217 02:05:19.388067 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.414283 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.414309 1498704 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:05:19.414396 1498704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:05:19.439246 1498704 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:05:19.439272 1498704 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:05:19.439280 1498704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1217 02:05:19.439400 1498704 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-456492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
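The drop-in above uses the standard systemd override pattern: the first, empty ExecStart= clears the command inherited from the base kubelet.service, and the second ExecStart= supplies the full replacement with the per-node flags (hostname override, node IP, kubeconfig paths). The merged result can be checked on the node with, for example:

	# show the base unit plus every drop-in, in merge order
	systemctl cat kubelet
	# confirm the command line systemd will actually run
	systemctl show kubelet -p ExecStart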
	I1217 02:05:19.439475 1498704 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:05:19.464932 1498704 cni.go:84] Creating CNI manager for ""
	I1217 02:05:19.464957 1498704 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 02:05:19.464978 1498704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:05:19.465000 1498704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456492 NodeName:newest-cni-456492 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:19.465118 1498704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-456492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:05:19.465204 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:19.473220 1498704 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:19.473323 1498704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:19.481191 1498704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 02:05:19.494733 1498704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:19.508255 1498704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
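The YAML generated above is a multi-document file (InitConfiguration and ClusterConfiguration on kubeadm.k8s.io/v1beta4, a KubeletConfiguration, and a KubeProxyConfiguration), and it has just been written to /var/tmp/minikube/kubeadm.yaml.new. Assuming kubeadm sits alongside kubelet and kubectl in the binaries directory, as minikube normally arranges, recent kubeadm releases can sanity-check such a file before it is used:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new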
	I1217 02:05:19.521299 1498704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:19.524923 1498704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:19.534869 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:19.640328 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:19.658104 1498704 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492 for IP: 192.168.85.2
	I1217 02:05:19.658171 1498704 certs.go:195] generating shared ca certs ...
	I1217 02:05:19.658202 1498704 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:19.658408 1498704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:05:19.658487 1498704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:05:19.658525 1498704 certs.go:257] generating profile certs ...
	I1217 02:05:19.658693 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/client.key
	I1217 02:05:19.658805 1498704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key.0ff7556d
	I1217 02:05:19.658882 1498704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key
	I1217 02:05:19.659021 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:05:19.659079 1498704 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:19.659103 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:05:19.659164 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:05:19.659220 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:05:19.659286 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:05:19.659364 1498704 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:05:19.660007 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:19.680759 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:05:19.702848 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:19.724636 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:05:19.743745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:19.766745 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:19.785567 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:19.805217 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/newest-cni-456492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 02:05:19.823885 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:05:19.842565 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:05:19.861136 1498704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:19.881009 1498704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:19.900011 1498704 ssh_runner.go:195] Run: openssl version
	I1217 02:05:19.907885 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.916589 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:19.925294 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929759 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.929879 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:19.973048 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:19.981056 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:05:19.988859 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:05:19.996704 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001580 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.001857 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:05:20.047306 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:05:20.055839 1498704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.063938 1498704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:05:20.072095 1498704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076535 1498704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.076605 1498704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:05:20.118765 1498704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
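The repeating test/ln/ls/openssl pattern above installs each CA into the OpenSSL trust directory: openssl x509 -hash -noout prints the certificate's subject-name hash, and OpenSSL looks certificates up via a symlink named <hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A minimal sketch for a single certificate, using paths from the log:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"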
	I1217 02:05:20.126976 1498704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:20.131206 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:20.172934 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:20.214362 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:20.255854 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:20.297036 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:20.339864 1498704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
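Each of the six checks above uses openssl's -checkend flag: with -checkend 86400 the command exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 otherwise, which is how minikube decides whether the control-plane certificates need regenerating. For example:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	    echo "still valid for at least 24h"
	else
	    echo "expires within 24h (or already expired)"
	fi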
	I1217 02:05:20.381722 1498704 kubeadm.go:401] StartCluster: {Name:newest-cni-456492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:20.381822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:05:20.381904 1498704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:05:20.424644 1498704 cri.go:89] found id: ""
	I1217 02:05:20.424764 1498704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:20.433427 1498704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:20.433456 1498704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:20.433550 1498704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:20.441251 1498704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:20.442099 1498704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456492" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.442456 1498704 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1208015/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456492" cluster setting kubeconfig missing "newest-cni-456492" context setting]
	I1217 02:05:20.442986 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.445078 1498704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:20.453918 1498704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:05:20.453968 1498704 kubeadm.go:602] duration metric: took 20.505601ms to restartPrimaryControlPlane
	I1217 02:05:20.453978 1498704 kubeadm.go:403] duration metric: took 72.266987ms to StartCluster
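The restart decision above hangs on the diff -u between the kubeadm config already on disk and the freshly written .new file: an exit status of 0 (no differences) is what produces the "does not require reconfiguration" path, so the existing control plane is reused instead of re-running kubeadm. Roughly:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    echo "configs match: reuse the running control plane"
	else
	    echo "configs differ: reconfigure via kubeadm"
	fi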
	I1217 02:05:20.453993 1498704 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.454058 1498704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:05:20.454938 1498704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:20.455145 1498704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:05:20.455516 1498704 config.go:182] Loaded profile config "newest-cni-456492": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:20.455530 1498704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:20.455683 1498704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456492"
	I1217 02:05:20.455704 1498704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456492"
	I1217 02:05:20.455734 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456291 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.456447 1498704 addons.go:70] Setting dashboard=true in profile "newest-cni-456492"
	I1217 02:05:20.456459 1498704 addons.go:239] Setting addon dashboard=true in "newest-cni-456492"
	W1217 02:05:20.456465 1498704 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:20.456487 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.456873 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.457295 1498704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456492"
	I1217 02:05:20.457327 1498704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456492"
	I1217 02:05:20.457617 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.460758 1498704 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:20.464032 1498704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:20.511072 1498704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:20.511238 1498704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:20.511526 1498704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456492"
	I1217 02:05:20.511584 1498704 host.go:66] Checking if "newest-cni-456492" exists ...
	I1217 02:05:20.512215 1498704 cli_runner.go:164] Run: docker container inspect newest-cni-456492 --format={{.State.Status}}
	I1217 02:05:20.514400 1498704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.514426 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:20.514495 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.517419 1498704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 02:05:18.635204 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:21.135093 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:20.520345 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:20.520380 1498704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:20.520470 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.545933 1498704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.545958 1498704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:20.546028 1498704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456492
	I1217 02:05:20.571506 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.597655 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.610038 1498704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/newest-cni-456492/id_rsa Username:docker}
	I1217 02:05:20.744231 1498704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:20.749535 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:20.770211 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:20.807578 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:20.807656 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:20.822894 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:20.822966 1498704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:20.838508 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:20.838583 1498704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:20.854473 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:20.854546 1498704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:20.870442 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:20.870510 1498704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:20.892689 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:20.892763 1498704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:20.907212 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:20.907283 1498704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 02:05:20.920377 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:20.920447 1498704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:20.934242 1498704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:20.934313 1498704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:20.949356 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:21.122136 1498704 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:05:21.122238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
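The apiserver wait here is purely process-based: with -f pgrep matches against the full command line, -x requires the pattern to match that whole line rather than a substring, and -n returns only the newest matching PID, so the command succeeds exactly when a kube-apiserver whose arguments mention minikube is running. The polling loop amounts to (sketch):

	# block until a matching kube-apiserver process appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 1
	done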
	W1217 02:05:21.122377 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122428 1498704 retry.go:31] will retry after 140.698925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122498 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122514 1498704 retry.go:31] will retry after 200.872114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.122730 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.122750 1498704 retry.go:31] will retry after 347.753215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.264115 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.324524 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:21.326955 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.326987 1498704 retry.go:31] will retry after 509.503403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:21.390952 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.391056 1498704 retry.go:31] will retry after 486.50092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.471226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.536155 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.536193 1498704 retry.go:31] will retry after 374.340896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.623199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:21.836797 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:21.878378 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:21.911452 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:21.932525 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:21.932573 1498704 retry.go:31] will retry after 673.446858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.024062 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.024104 1498704 retry.go:31] will retry after 357.640722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:22.030810 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.030855 1498704 retry.go:31] will retry after 697.108634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:22.122842 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:22.382402 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.447494 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.447529 1498704 retry.go:31] will retry after 907.58474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:22.606794 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:22.623237 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:22.712284 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.712316 1498704 retry.go:31] will retry after 1.166453431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:22.728640 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.790257 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.790294 1498704 retry.go:31] will retry after 693.242896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
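
The fractional, steadily growing "will retry after ..." intervals above (697ms, 907ms, 1.16s, ...) are the signature of a jittered, exponential backoff wrapped around each failed kubectl apply. A minimal sketch of that pattern in Go, as an illustration only (applyWithRetry and its arguments are hypothetical, not minikube's actual retry.go API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl and, on failure, sleeps for a
// randomized, doubling delay before trying again, mirroring the growing
// "will retry after ..." intervals in the log above.
func applyWithRetry(manifest string, attempts int) error {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Jitter the wait into [0.5x, 1.5x) of the base delay so parallel
		// appliers (storageclass, storage-provisioner, dashboard) do not
		// retry in lockstep.
		wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %v: %v\n%s", wait, err, out)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("apply did not succeed within the retry budget")
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}

Note that every retry here fails for the same root cause: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443, so it is validation, not the manifests, that fails; hence kubectl's own suggestion of --validate=false.
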
	I1217 02:05:23.122710 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:23.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:23.356122 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:23.441808 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.441876 1498704 retry.go:31] will retry after 812.660244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:23.484193 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:23.553009 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.553088 1498704 retry.go:31] will retry after 1.540590446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:23.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:23.878932 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:23.940625 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:23.940657 1498704 retry.go:31] will retry after 1.715347401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:24.123129 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:24.255570 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.318166 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.318201 1498704 retry.go:31] will retry after 2.528105033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:24.622416 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:25.094702 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.122740 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.190434 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.190468 1498704 retry.go:31] will retry after 2.137532007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:25.622874 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:25.634571 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:25.656976 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.735191 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.735228 1498704 retry.go:31] will retry after 1.824141068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:26.122718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:26.622402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
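
The half-second cadence of "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above is minikube polling, through its SSH runner, for a kube-apiserver process between apply attempts (-x exact match, -n newest, -f match against the full command line). A rough local sketch of that poll; the helper name is illustrative, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process;
// pgrep exits 0 only when at least one process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	// Poll every 500ms, matching the timestamps in the log above.
	for !apiserverRunning() {
		fmt.Println("kube-apiserver not up yet; polling again in 500ms")
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver process found")
}
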
	I1217 02:05:26.847039 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:26.915825 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:26.915864 1498704 retry.go:31] will retry after 3.628983163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:27.123109 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:27.329106 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:27.406949 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.406981 1498704 retry.go:31] will retry after 4.03347247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:05:27.560441 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:27.620941 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:27.620972 1498704 retry.go:31] will retry after 3.991176553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:05:27.623048 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:27.635077 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:28.123323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:28.622690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.123056 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:29.622383 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:29.635231 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:30.122331 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
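
The interleaved node_ready.go warnings come from a second test process (pid 1494358) polling the "Ready" condition of node "no-preload-178365" roughly every two seconds while that cluster's API server also refuses connections. A sketch of such a poll using client-go; waitNodeReady and the kubeconfig path are illustrative assumptions, not taken from the test code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Matches the log: the GET fails while port 8443 refuses connections.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q never reported Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "no-preload-178365", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
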
	I1217 02:05:30.545057 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:30.621785 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.621822 1498704 retry.go:31] will retry after 4.4452238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:05:30.622853 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.122373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:31.440743 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:31.509992 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.510031 1498704 retry.go:31] will retry after 5.407597033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.613135 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:31.622584 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:31.697739 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.697776 1498704 retry.go:31] will retry after 2.825488937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:32.122427 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:32.622356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:32.134521 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:34.135119 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:36.135210 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
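Interleaved with the addon retries, a second test process (pid 1494358) is polling the Ready condition of node no-preload-178365 against 192.168.76.2:8443 and hitting the same connection-refused error, so that cluster's apiserver is down as well. A hedged client-go sketch of this kind of readiness poll follows; it is an illustration, not minikube's node_ready.go, and assumes a kubeconfig at the default path:

    // Poll a node's Ready condition until it is True. The node name is taken
    // from the log above; everything else is assumed for the sketch.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            // While the apiserver is down this is the "connection refused"
            // error logged above; the caller just retries.
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            ready, err := nodeReady(cs, "no-preload-178365")
            fmt.Println(ready, err)
            if ready {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }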
	I1217 02:05:33.122865 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:33.622376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.122833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:34.523532 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:34.583134 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.583163 1498704 retry.go:31] will retry after 5.545323918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:34.622442 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:35.068147 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:35.122850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:35.134133 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.134169 1498704 retry.go:31] will retry after 4.861802964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:35.622377 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.122369 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:36.622378 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
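The ssh_runner lines repeating every ~500ms are the apiserver process probe: pgrep -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest match, so the loop keeps firing until a kube-apiserver process whose command line mentions "minikube" exists. An illustrative Go version of the same poll (assumes pgrep on the host; not minikube source):

    // Wait for a kube-apiserver process to appear, mirroring the repeated
    // "sudo pgrep -xnf kube-apiserver.*minikube.*" runs in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 iff at least one process matches the pattern.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println(waitForAPIServer(10 * time.Second))
    }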
	I1217 02:05:36.918683 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:36.978447 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.978481 1498704 retry.go:31] will retry after 6.962519237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:37.122560 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:37.622836 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:38.635154 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:41.134707 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:38.122524 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:38.622862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.122871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.623166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:39.996206 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:40.063255 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.063292 1498704 retry.go:31] will retry after 7.781680021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.122526 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:40.129164 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:40.214505 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:40.214533 1498704 retry.go:31] will retry after 8.678807682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
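The waits announced by retry.go in this section (5.4s, 2.8s, 5.5s, 4.9s, 7.0s, 7.8s, 8.7s, then 16-29s further down) grow roughly exponentially with random jitter. A hedged sketch of that pattern, illustrative only since minikube's actual backoff parameters may differ:

    // Retry an operation with exponentially growing, jittered waits,
    // approximating the uneven intervals logged by retry.go above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithJitter(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Double the base each attempt, then add +/-25% jitter so
            // concurrent retriers do not synchronize.
            wait := base * time.Duration(1<<i)
            jitter := time.Duration(rand.Int63n(int64(wait)/2)) - wait/4
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
        }
        return err
    }

    func main() {
        _ = retryWithJitter(4, 3*time.Second, func() error {
            return errors.New("connection refused")
        })
    }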
	I1217 02:05:40.622298 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.122333 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:41.622358 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.127159 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:42.622438 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:43.635439 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:46.135272 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:43.122461 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.622352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:43.941994 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:44.001689 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.001730 1498704 retry.go:31] will retry after 6.066883065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:44.123123 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:44.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.126164 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:45.623052 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.122898 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:46.622334 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.122393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.622323 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:47.845223 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:48.634542 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:50.635081 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:47.908667 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:47.908705 1498704 retry.go:31] will retry after 18.007710991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.122861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.622412 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:48.894229 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:48.969090 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:48.969125 1498704 retry.go:31] will retry after 16.055685136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:49.122381 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:49.622837 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:50.069336 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:50.122996 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:50.134357 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.134397 1498704 retry.go:31] will retry after 18.576318696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.622399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.122356 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:51.623152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.122522 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:52.622365 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:53.135083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:05:55.135448 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:53.123228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:53.622373 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:54.622394 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.122388 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:55.622375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.122434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:56.622357 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.122345 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:57.622407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:05:57.635130 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:00.134795 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:05:58.122690 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:58.622871 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.122944 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:05:59.622822 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.123626 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:00.623133 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.122517 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:01.622861 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.122995 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:02.622415 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:02.135223 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:04.634982 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:03.122366 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:03.623001 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.122805 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:04.622382 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.025226 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:05.088234 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.088268 1498704 retry.go:31] will retry after 18.521411157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
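Each apply attempt runs the version-pinned kubectl from /var/lib/minikube/binaries with KUBECONFIG pointed at the in-VM kubeconfig, under sudo. A rough reconstruction of how such a command line could be assembled (hypothetical helper, not minikube's ssh_runner; sudo accepts leading VAR=value assignments before the command):

    // Build and run the versioned-kubectl apply seen throughout this log.
    // Hypothetical helper for illustration only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyAddon(version string, manifests ...string) error {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/" + version + "/kubectl",
            "apply", "--force",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        _ = applyAddon("v1.35.0-beta.0", "/etc/kubernetes/addons/storage-provisioner.yaml")
    }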
	I1217 02:06:05.122353 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.622518 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:05.916578 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:05.977704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:05.977737 1498704 retry.go:31] will retry after 29.235613176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.123051 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:06.623116 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.122863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:07.622361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:07.134988 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:09.135112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:11.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:08.123131 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.622326 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:08.711597 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:08.773115 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:08.773147 1498704 retry.go:31] will retry after 24.92518591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:09.122643 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:09.622393 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.122375 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:10.622634 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.122959 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:11.622850 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.122346 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:12.622435 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:13.634975 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:16.134662 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:13.122648 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:13.622828 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.123317 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:14.622872 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.122361 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:15.622296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.122862 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:16.622835 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.122778 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:17.622329 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:06:18.135126 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:20.135188 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:18.123152 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:18.623163 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.122407 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:19.622841 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:20.123196 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
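
The half-second cadence of the pgrep lines above is a liveness poll: minikube keeps checking, over SSH, whether a kube-apiserver process exists at all before querying the API. A self-contained sketch of that loop, run locally rather than through minikube's ssh_runner (function name and timeout are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process
    // appears or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
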
	I1217 02:06:20.622898 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:20.622982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:20.655063 1498704 cri.go:89] found id: ""
	I1217 02:06:20.655091 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.655100 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:20.655106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:20.655169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:20.687901 1498704 cri.go:89] found id: ""
	I1217 02:06:20.687924 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.687932 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:20.687938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:20.687996 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:20.713818 1498704 cri.go:89] found id: ""
	I1217 02:06:20.713845 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.713854 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:20.713860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:20.713918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:20.738353 1498704 cri.go:89] found id: ""
	I1217 02:06:20.738376 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.738384 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:20.738396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:20.738455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:20.763275 1498704 cri.go:89] found id: ""
	I1217 02:06:20.763300 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.763309 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:20.763316 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:20.763377 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:20.787303 1498704 cri.go:89] found id: ""
	I1217 02:06:20.787328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.787337 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:20.787343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:20.787402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:20.812203 1498704 cri.go:89] found id: ""
	I1217 02:06:20.812230 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.812238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:20.812244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:20.812304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:20.836788 1498704 cri.go:89] found id: ""
	I1217 02:06:20.836814 1498704 logs.go:282] 0 containers: []
	W1217 02:06:20.836823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
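
Each "listing CRI containers" / `found id: ""` pair above is one probe in a sweep over the expected control-plane components; an empty crictl result is what produces the `No container was found matching ...` warnings. A sketch of the same sweep, assuming crictl is installed and sudo is available:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // The components the log checks, in the same order.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
    	for _, name := range components {
    		// --quiet prints one container ID per line; no output means
    		// no container with that name exists, running or exited.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }
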
	I1217 02:06:20.836831 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:20.836842 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:20.901301 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:20.892214    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.893004    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.894881    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.895590    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:20.897310    1836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:20.901324 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:20.901337 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:20.927207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:20.927244 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:20.955351 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:20.955377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:21.010892 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:21.010928 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
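
With no containers found, minikube falls back to gathering diagnostics from the host itself: kubelet and containerd via journalctl, kernel messages via dmesg, and container status via crictl or docker. A rough sketch that shells out to the same pipelines the log shows (illustrative only; the real code feeds each result into its log report):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // The same shell pipelines visible in the "Gathering logs for ..." lines.
    var sources = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"containerd":       "sudo journalctl -u containerd -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", name, err)
    		}
    		fmt.Printf("=== %s ===\n%s\n", name, out)
    	}
    }
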
	W1217 02:06:22.635190 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:25.135234 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:23.526340 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:23.536950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:23.537021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:23.561240 1498704 cri.go:89] found id: ""
	I1217 02:06:23.561267 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.561276 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:23.561282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:23.561340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:23.586385 1498704 cri.go:89] found id: ""
	I1217 02:06:23.586407 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.586415 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:23.586421 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:23.586479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:23.610820 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:06:23.612177 1498704 cri.go:89] found id: ""
	I1217 02:06:23.612201 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.612210 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:23.612216 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:23.612270 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1217 02:06:23.698147 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698227 1498704 retry.go:31] will retry after 35.769421328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:23.698299 1498704 cri.go:89] found id: ""
	I1217 02:06:23.698328 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.698348 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:23.698379 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:23.698473 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:23.730479 1498704 cri.go:89] found id: ""
	I1217 02:06:23.730555 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.730569 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:23.730577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:23.730656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:23.757694 1498704 cri.go:89] found id: ""
	I1217 02:06:23.757717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.757726 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:23.757732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:23.757802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:23.787070 1498704 cri.go:89] found id: ""
	I1217 02:06:23.787145 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.787162 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:23.787170 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:23.787231 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:23.815895 1498704 cri.go:89] found id: ""
	I1217 02:06:23.815928 1498704 logs.go:282] 0 containers: []
	W1217 02:06:23.815937 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:23.815947 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:23.815977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:23.845530 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:23.845558 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:23.904348 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:23.904385 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:23.919409 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:23.919438 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:23.986183 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:23.977453    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.978260    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.979840    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.980504    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:23.982166    1973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:23.986246 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:23.986266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:26.512910 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:26.523572 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:26.523644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:26.549045 1498704 cri.go:89] found id: ""
	I1217 02:06:26.549077 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.549087 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:26.549100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:26.549181 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:26.573386 1498704 cri.go:89] found id: ""
	I1217 02:06:26.573409 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.573417 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:26.573423 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:26.573485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:26.597629 1498704 cri.go:89] found id: ""
	I1217 02:06:26.597673 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.597688 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:26.597695 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:26.597755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:26.625905 1498704 cri.go:89] found id: ""
	I1217 02:06:26.625933 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.625942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:26.625949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:26.626016 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:26.663442 1498704 cri.go:89] found id: ""
	I1217 02:06:26.663466 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.663475 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:26.663482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:26.663565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:26.692315 1498704 cri.go:89] found id: ""
	I1217 02:06:26.692342 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.692351 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:26.692362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:26.692422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:26.718259 1498704 cri.go:89] found id: ""
	I1217 02:06:26.718287 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.718296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:26.718303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:26.718361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:26.743360 1498704 cri.go:89] found id: ""
	I1217 02:06:26.743383 1498704 logs.go:282] 0 containers: []
	W1217 02:06:26.743391 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:26.743400 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:26.743412 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:26.770132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:26.770158 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:26.829657 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:26.829749 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:26.845511 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:26.845538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:26.912984 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:26.904906    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.905559    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907112    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.907601    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:26.909094    2085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:26.913004 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:26.913017 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:27.635261 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:30.135207 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
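
The interleaved warnings from process 1494358 belong to a second cluster ("no-preload-178365") whose readiness check likewise cannot reach its apiserver at 192.168.76.2:8443, and whose node_ready.go retries on connection errors rather than declaring the node NotReady. In client-go terms such a check fetches the Node object and inspects its Ready condition; a sketch under those assumptions, with the kubeconfig path and node name taken from the log and an illustrative polling interval:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ready, err := nodeReady(client, "no-preload-178365")
    		if err != nil {
    			fmt.Println("will retry:", err) // matches the node_ready.go:55 warnings
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
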
	I1217 02:06:29.440066 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:29.450548 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:29.450621 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:29.474768 1498704 cri.go:89] found id: ""
	I1217 02:06:29.474800 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.474809 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:29.474816 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:29.474886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:29.498947 1498704 cri.go:89] found id: ""
	I1217 02:06:29.498969 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.498977 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:29.498983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:29.499041 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:29.523540 1498704 cri.go:89] found id: ""
	I1217 02:06:29.523564 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.523573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:29.523579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:29.523643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:29.556044 1498704 cri.go:89] found id: ""
	I1217 02:06:29.556069 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.556078 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:29.556084 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:29.556144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:29.581373 1498704 cri.go:89] found id: ""
	I1217 02:06:29.581399 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.581408 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:29.581414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:29.581485 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:29.607453 1498704 cri.go:89] found id: ""
	I1217 02:06:29.607479 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.607489 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:29.607495 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:29.607576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:29.639841 1498704 cri.go:89] found id: ""
	I1217 02:06:29.639865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.639875 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:29.639881 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:29.639938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:29.670608 1498704 cri.go:89] found id: ""
	I1217 02:06:29.670635 1498704 logs.go:282] 0 containers: []
	W1217 02:06:29.670643 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:29.670653 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:29.670665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:29.728148 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:29.728181 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:29.743004 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:29.743029 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:29.815740 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:29.806960    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.807770    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.809571    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.810115    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:29.811798    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:29.815762 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:29.815775 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:29.842206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:29.842243 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:32.370825 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:32.383399 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:32.383490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:32.416122 1498704 cri.go:89] found id: ""
	I1217 02:06:32.416148 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.416157 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:32.416164 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:32.416235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:32.450068 1498704 cri.go:89] found id: ""
	I1217 02:06:32.450092 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.450101 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:32.450107 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:32.450176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:32.475101 1498704 cri.go:89] found id: ""
	I1217 02:06:32.475126 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.475135 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:32.475142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:32.475218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:32.500347 1498704 cri.go:89] found id: ""
	I1217 02:06:32.500372 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.500380 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:32.500387 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:32.500447 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:32.525315 1498704 cri.go:89] found id: ""
	I1217 02:06:32.525346 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.525355 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:32.525361 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:32.525440 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:32.550267 1498704 cri.go:89] found id: ""
	I1217 02:06:32.550341 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.550358 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:32.550365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:32.550424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:32.575413 1498704 cri.go:89] found id: ""
	I1217 02:06:32.575438 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.575447 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:32.575453 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:32.575559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:32.603477 1498704 cri.go:89] found id: ""
	I1217 02:06:32.603503 1498704 logs.go:282] 0 containers: []
	W1217 02:06:32.603513 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:32.603523 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:32.603568 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:32.669699 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:32.669735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:32.686097 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:32.686126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:32.755583 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:32.747406    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.747925    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.749539    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.750156    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:32.751709    2297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:32.755604 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:32.755616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:32.782146 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:32.782195 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:32.135482 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:34.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:33.698737 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:33.767478 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:33.767516 1498704 retry.go:31] will retry after 19.401613005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.214860 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:35.276710 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.276741 1498704 retry.go:31] will retry after 25.686831054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:35.310030 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:35.320395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:35.320472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:35.344503 1498704 cri.go:89] found id: ""
	I1217 02:06:35.344525 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.344533 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:35.344539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:35.344597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:35.375750 1498704 cri.go:89] found id: ""
	I1217 02:06:35.375773 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.375782 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:35.375788 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:35.375857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:35.403776 1498704 cri.go:89] found id: ""
	I1217 02:06:35.403803 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.403813 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:35.403819 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:35.403878 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:35.437584 1498704 cri.go:89] found id: ""
	I1217 02:06:35.437608 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.437616 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:35.437623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:35.437723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:35.467173 1498704 cri.go:89] found id: ""
	I1217 02:06:35.467207 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.467216 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:35.467223 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:35.467289 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:35.491257 1498704 cri.go:89] found id: ""
	I1217 02:06:35.491284 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.491294 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:35.491301 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:35.491380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:35.515935 1498704 cri.go:89] found id: ""
	I1217 02:06:35.515961 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.515971 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:35.515978 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:35.516077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:35.542706 1498704 cri.go:89] found id: ""
	I1217 02:06:35.542730 1498704 logs.go:282] 0 containers: []
	W1217 02:06:35.542739 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:35.542748 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:35.542759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:35.601383 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:35.601428 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:35.616228 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:35.616269 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:35.693548 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:35.684794    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.685586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.687478    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.688000    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:35.689586    2419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:35.693569 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:35.693584 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:35.719247 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:35.719286 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
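Each diagnostic pass in this stretch is the same probe loop: minikube asks crictl for every expected control-plane container by name, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. A compact Go sketch of the probe step, assuming sudo and crictl are available on the node, is:

	// crisketch.go - probe for control-plane containers the way the log
	// above does: `crictl ps -a --quiet --name=<component>` per name.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}

An empty result for all eight names, repeated on every pass below, means no Kubernetes containers are running at all on the node, which is consistent with every subsequent kubectl call dying with "connection refused".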
	W1217 02:06:36.635304 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:39.135165 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:41.135205 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
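Interleaved with the diagnostic passes above, a second test process (pid 1494358) is polling the no-preload node's Ready condition every ~2-2.5s against 192.168.76.2:8443 and getting the same connection refused. A bare-bones sketch of such a poll follows; it uses plain HTTP with verification disabled purely for illustration, whereas the real code authenticates via client-go with the kubeconfig's credentials.

	// pollsketch.go - illustrative poll of a node object over HTTPS.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Illustrative only: a real client presents client certificates
		// from the kubeconfig; this just shows the retry cadence.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365"
		for i := 0; i < 5; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("error getting node (will retry):", err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered with status:", resp.Status)
			return
		}
	}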
	I1217 02:06:38.250028 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:38.261967 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:38.262037 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:38.286400 1498704 cri.go:89] found id: ""
	I1217 02:06:38.286423 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.286431 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:38.286437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:38.286499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:38.310618 1498704 cri.go:89] found id: ""
	I1217 02:06:38.310639 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.310647 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:38.310654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:38.310713 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:38.335110 1498704 cri.go:89] found id: ""
	I1217 02:06:38.335136 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.335144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:38.335151 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:38.335214 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:38.364179 1498704 cri.go:89] found id: ""
	I1217 02:06:38.364202 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.364211 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:38.364218 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:38.364278 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:38.402338 1498704 cri.go:89] found id: ""
	I1217 02:06:38.402366 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.402374 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:38.402384 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:38.402443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:38.433053 1498704 cri.go:89] found id: ""
	I1217 02:06:38.433081 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.433090 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:38.433096 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:38.433155 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:38.461635 1498704 cri.go:89] found id: ""
	I1217 02:06:38.461688 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.461698 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:38.461704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:38.461767 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:38.486774 1498704 cri.go:89] found id: ""
	I1217 02:06:38.486798 1498704 logs.go:282] 0 containers: []
	W1217 02:06:38.486807 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:38.486816 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:38.486827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:38.543417 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:38.543453 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:38.558472 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:38.558499 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:38.627234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:38.617000    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618012    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.618668    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620016    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:38.620787    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:38.627308 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:38.627336 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:38.656399 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:38.656481 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:41.188669 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:41.199463 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:41.199550 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:41.223737 1498704 cri.go:89] found id: ""
	I1217 02:06:41.223762 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.223771 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:41.223778 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:41.223842 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:41.248972 1498704 cri.go:89] found id: ""
	I1217 02:06:41.248998 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.249014 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:41.249022 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:41.249084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:41.274840 1498704 cri.go:89] found id: ""
	I1217 02:06:41.274873 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.274886 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:41.274892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:41.274965 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:41.302162 1498704 cri.go:89] found id: ""
	I1217 02:06:41.302188 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.302197 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:41.302204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:41.302274 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:41.331745 1498704 cri.go:89] found id: ""
	I1217 02:06:41.331771 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.331780 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:41.331786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:41.331872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:41.366507 1498704 cri.go:89] found id: ""
	I1217 02:06:41.366538 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.366559 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:41.366567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:41.366642 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:41.402343 1498704 cri.go:89] found id: ""
	I1217 02:06:41.402390 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.402400 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:41.402409 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:41.402482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:41.442142 1498704 cri.go:89] found id: ""
	I1217 02:06:41.442169 1498704 logs.go:282] 0 containers: []
	W1217 02:06:41.442177 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:41.442187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:41.442198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:41.498349 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:41.498432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:41.514261 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:41.514287 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:41.577450 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:41.569820    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.570197    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571675    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.571979    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:41.573406    2647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:41.577470 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:41.577483 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:41.602731 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:41.602766 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:43.635083 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:45.635371 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:44.138863 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:44.149308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:44.149424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:44.175006 1498704 cri.go:89] found id: ""
	I1217 02:06:44.175031 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.175040 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:44.175047 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:44.175103 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:44.199571 1498704 cri.go:89] found id: ""
	I1217 02:06:44.199596 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.199605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:44.199612 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:44.199669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:44.227289 1498704 cri.go:89] found id: ""
	I1217 02:06:44.227313 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.227323 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:44.227329 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:44.227418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:44.255509 1498704 cri.go:89] found id: ""
	I1217 02:06:44.255549 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.255558 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:44.255564 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:44.255622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:44.282827 1498704 cri.go:89] found id: ""
	I1217 02:06:44.282850 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.282858 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:44.282864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:44.282971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:44.310331 1498704 cri.go:89] found id: ""
	I1217 02:06:44.310354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.310363 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:44.310370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:44.310427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:44.334927 1498704 cri.go:89] found id: ""
	I1217 02:06:44.334952 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.334961 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:44.334968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:44.335068 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:44.359119 1498704 cri.go:89] found id: ""
	I1217 02:06:44.359144 1498704 logs.go:282] 0 containers: []
	W1217 02:06:44.359153 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:44.359162 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:44.359192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:44.436966 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:44.428269    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.429230    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.430883    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.431196    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:44.432712    2746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:44.436987 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:44.437000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:44.462649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:44.462686 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:44.492091 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:44.492120 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:44.548670 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:44.548707 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.063448 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:47.073962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:47.074076 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:47.100530 1498704 cri.go:89] found id: ""
	I1217 02:06:47.100565 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.100574 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:47.100580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:47.100656 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:47.126541 1498704 cri.go:89] found id: ""
	I1217 02:06:47.126573 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.126582 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:47.126589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:47.126657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:47.155783 1498704 cri.go:89] found id: ""
	I1217 02:06:47.155807 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.155816 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:47.155822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:47.155887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:47.183519 1498704 cri.go:89] found id: ""
	I1217 02:06:47.183547 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.183556 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:47.183562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:47.183640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:47.207004 1498704 cri.go:89] found id: ""
	I1217 02:06:47.207029 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.207038 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:47.207044 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:47.207107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:47.236132 1498704 cri.go:89] found id: ""
	I1217 02:06:47.236157 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.236166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:47.236173 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:47.236237 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:47.262428 1498704 cri.go:89] found id: ""
	I1217 02:06:47.262452 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.262460 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:47.262470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:47.262526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:47.291039 1498704 cri.go:89] found id: ""
	I1217 02:06:47.291113 1498704 logs.go:282] 0 containers: []
	W1217 02:06:47.291127 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:47.291137 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:47.291154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:47.348423 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:47.348457 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:47.362973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:47.363001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:47.446529 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:47.438106    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.438833    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440410    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.440890    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:47.442358    2867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:47.446602 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:47.446619 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:47.471848 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:47.471885 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:06:48.135178 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:50.635159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:50.002430 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:50.016670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:50.016759 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:50.048092 1498704 cri.go:89] found id: ""
	I1217 02:06:50.048116 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.048126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:50.048132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:50.048193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:50.077981 1498704 cri.go:89] found id: ""
	I1217 02:06:50.078006 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.078016 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:50.078023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:50.078084 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:50.104799 1498704 cri.go:89] found id: ""
	I1217 02:06:50.104824 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.104833 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:50.104839 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:50.104899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:50.134987 1498704 cri.go:89] found id: ""
	I1217 02:06:50.135010 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.135019 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:50.135025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:50.135088 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:50.163663 1498704 cri.go:89] found id: ""
	I1217 02:06:50.163689 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.163698 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:50.163704 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:50.163771 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:50.189331 1498704 cri.go:89] found id: ""
	I1217 02:06:50.189354 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.189362 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:50.189369 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:50.189435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:50.214491 1498704 cri.go:89] found id: ""
	I1217 02:06:50.214516 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.214525 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:50.214531 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:50.214590 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:50.238415 1498704 cri.go:89] found id: ""
	I1217 02:06:50.238442 1498704 logs.go:282] 0 containers: []
	W1217 02:06:50.238451 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:50.238460 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:50.238472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:50.269776 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:50.269804 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:50.327018 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:50.327055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:50.341848 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:50.341876 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:50.424429 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:50.413437    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.414378    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.415990    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.416331    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:50.417849    2989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:50.424452 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:50.424466 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:06:52.635229 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:54.635273 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:06:52.954006 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:52.964727 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:52.964802 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:52.989789 1498704 cri.go:89] found id: ""
	I1217 02:06:52.989810 1498704 logs.go:282] 0 containers: []
	W1217 02:06:52.989819 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:52.989826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:52.989887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:53.015439 1498704 cri.go:89] found id: ""
	I1217 02:06:53.015467 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.015476 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:53.015482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:53.015592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:53.040841 1498704 cri.go:89] found id: ""
	I1217 02:06:53.040865 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.040875 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:53.040882 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:53.040942 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:53.066349 1498704 cri.go:89] found id: ""
	I1217 02:06:53.066374 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.066383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:53.066389 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:53.066451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:53.091390 1498704 cri.go:89] found id: ""
	I1217 02:06:53.091415 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.091424 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:53.091430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:53.091490 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:53.117556 1498704 cri.go:89] found id: ""
	I1217 02:06:53.117581 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.117590 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:53.117597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:53.117683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:53.142385 1498704 cri.go:89] found id: ""
	I1217 02:06:53.142411 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.142421 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:53.142428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:53.142487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:53.167326 1498704 cri.go:89] found id: ""
	I1217 02:06:53.167351 1498704 logs.go:282] 0 containers: []
	W1217 02:06:53.167360 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:53.167370 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:53.167410 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:53.169580 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:06:53.227048 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:53.227133 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:06:53.263335 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:53.263474 1498704 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
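
Every dashboard manifest above fails identically: with no apiserver listening on localhost:8443, kubectl cannot download the OpenAPI schema it needs for client-side validation, so the apply exits 1 before anything reaches the cluster. (The suggested --validate=false would only skip validation; the apply itself would still fail against a dead apiserver.) Below is a minimal Go sketch of the retry-until-apply pattern these addon callbacks appear to follow; the helper name applyWithRetry, the attempt budget, and the back-off interval are assumptions for illustration, not minikube's actual addons code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry (hypothetical) shells out to kubectl apply --force and
// retries on failure, the behavior suggested by "apply failed, will retry"
// lines elsewhere in this log.
func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
	args := []string{"apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(2 * time.Second) // back off while the apiserver (maybe) comes up
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml"}, 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
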
	I1217 02:06:53.263485 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:53.263548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:53.331925 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:53.323641    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.324423    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326097    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.326717    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:53.327921    3094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
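
The repeated memcache.go:265 lines are kubectl's discovery client retrying the API group list; each attempt dies at the TCP dial, so the failure sits at the socket layer, not in auth or TLS. A raw dial against the apiserver port reproduces the same symptom independently of kubectl; this sketch is purely illustrative and not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver listening this prints, e.g.:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
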
	I1217 02:06:53.331956 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:53.331970 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:53.358423 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:53.358461 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:55.889770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:55.902670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:55.902755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:55.931695 1498704 cri.go:89] found id: ""
	I1217 02:06:55.931717 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.931726 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:55.931732 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:55.931792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:55.957876 1498704 cri.go:89] found id: ""
	I1217 02:06:55.957898 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.957906 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:55.957913 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:55.957971 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:55.985470 1498704 cri.go:89] found id: ""
	I1217 02:06:55.985494 1498704 logs.go:282] 0 containers: []
	W1217 02:06:55.985503 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:55.985510 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:55.985569 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:56.012853 1498704 cri.go:89] found id: ""
	I1217 02:06:56.012876 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.012885 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:56.012892 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:56.012953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:56.038869 1498704 cri.go:89] found id: ""
	I1217 02:06:56.038896 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.038906 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:56.038912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:56.038974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:56.063896 1498704 cri.go:89] found id: ""
	I1217 02:06:56.063922 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.063931 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:56.063938 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:56.063998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:56.094167 1498704 cri.go:89] found id: ""
	I1217 02:06:56.094194 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.094202 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:56.094209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:56.094317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:56.119180 1498704 cri.go:89] found id: ""
	I1217 02:06:56.119203 1498704 logs.go:282] 0 containers: []
	W1217 02:06:56.119211 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
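
The block above is one complete sweep of minikube's log collector: pgrep -xnf first confirms no kube-apiserver process matches, then crictl ps -a --quiet --name=<component> runs once per control-plane component, and an empty ID list produces the `No container was found matching` warnings. A hedged Go sketch of that sweep follows; the component list is copied from the log, while the function shape is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// crictl exits 0 with empty output when nothing matches,
		// so an empty ID list is the "not found" signal.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
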
	I1217 02:06:56.119220 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:56.119233 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:56.145717 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:56.145755 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:56.174733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:56.174764 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:56.231996 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:56.232031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
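
With no containers to inspect, the collector falls back to host-level sources, all visible in the commands above: journalctl -u containerd -n 400 and journalctl -u kubelet -n 400 pull the last 400 lines of each unit's log; the backquoted `which crictl || echo crictl` guards against crictl missing from PATH, with sudo docker ps -a as a final fallback; and dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 keeps only warning-and-worse kernel messages, human-readable, with no pager or color.
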
	I1217 02:06:56.246270 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:56.246298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:56.310523 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:56.302748    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.303468    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.304652    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.305155    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:56.306670    3218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:58.810773 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:06:58.820984 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:06:58.821052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:06:58.844690 1498704 cri.go:89] found id: ""
	I1217 02:06:58.844713 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.844723 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:06:58.844729 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:06:58.844789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:06:58.869040 1498704 cri.go:89] found id: ""
	I1217 02:06:58.869065 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.869074 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:06:58.869081 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:06:58.869141 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:06:58.897937 1498704 cri.go:89] found id: ""
	I1217 02:06:58.897965 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.897974 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:06:58.897981 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:06:58.898046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:06:58.936181 1498704 cri.go:89] found id: ""
	I1217 02:06:58.936206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.936216 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:06:58.936222 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:06:58.936284 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:06:58.961870 1498704 cri.go:89] found id: ""
	I1217 02:06:58.961894 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.961902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:06:58.961908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:06:58.961973 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:06:58.987453 1498704 cri.go:89] found id: ""
	I1217 02:06:58.987476 1498704 logs.go:282] 0 containers: []
	W1217 02:06:58.987485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:06:58.987492 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:06:58.987589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:06:59.014256 1498704 cri.go:89] found id: ""
	I1217 02:06:59.014281 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.014290 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:06:59.014296 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:06:59.014356 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:06:59.043181 1498704 cri.go:89] found id: ""
	I1217 02:06:59.043206 1498704 logs.go:282] 0 containers: []
	W1217 02:06:59.043214 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:06:59.043224 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:06:59.043265 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:06:59.069988 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:06:59.070014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:06:59.126583 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:06:59.126616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:06:59.143769 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:06:59.143858 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:06:59.206336 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:06:59.198243    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.198884    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.200600    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.201133    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:06:59.202609    3330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:06:59.206357 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:06:59.206368 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:06:59.467894 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:59.526704 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.526801 1498704 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:00.964501 1498704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:01.024877 1498704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:01.024990 1498704 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:01.030055 1498704 out.go:179] * Enabled addons: 
	W1217 02:06:57.134604 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:06:59.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
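
The two W1217 lines above come from a different process (PID 1494358, the TestStartStop no-preload test), whose output interleaves with the functional test's (PID 1498704) in this combined log. It is polling node no-preload-178365 for the Ready condition against its own apiserver at 192.168.76.2:8443, which is equally unreachable. A hedged sketch of that poll loop: the URL and node name are taken from the log, while the loop shape, retry count, and interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real test authenticates with client certificates; skipping
		// TLS verification here is only enough to reach the same dial error.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365"
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d, will retry: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}
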
	I1217 02:07:01.032983 1498704 addons.go:530] duration metric: took 1m40.577449503s for enable addons: enabled=[]
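
The enabled=[] in the duration metric above confirms the outcome: after roughly 1m40s of retries, every requested addon seen earlier in this log (dashboard, storage-provisioner, default-storageclass) was abandoned, so "Enabled addons:" prints an empty list while the start continues toward its eventual timeout.
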
	I1217 02:07:01.732628 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:01.743041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:01.743116 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:01.767462 1498704 cri.go:89] found id: ""
	I1217 02:07:01.767488 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.767497 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:01.767503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:01.767602 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:01.793082 1498704 cri.go:89] found id: ""
	I1217 02:07:01.793104 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.793112 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:01.793119 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:01.793179 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:01.819716 1498704 cri.go:89] found id: ""
	I1217 02:07:01.819740 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.819749 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:01.819755 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:01.819815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:01.847485 1498704 cri.go:89] found id: ""
	I1217 02:07:01.847556 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.847572 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:01.847580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:01.847641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:01.875985 1498704 cri.go:89] found id: ""
	I1217 02:07:01.876062 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.876084 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:01.876103 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:01.876193 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:01.910714 1498704 cri.go:89] found id: ""
	I1217 02:07:01.910739 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.910748 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:01.910754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:01.910813 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:01.937846 1498704 cri.go:89] found id: ""
	I1217 02:07:01.937871 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.937880 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:01.937886 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:01.937945 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:01.964067 1498704 cri.go:89] found id: ""
	I1217 02:07:01.964091 1498704 logs.go:282] 0 containers: []
	W1217 02:07:01.964100 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:01.964114 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:01.964126 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:02.028700 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:02.020546    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.021140    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.022972    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.023596    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:02.024620    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:02.028724 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:02.028739 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:02.054141 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:02.054180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:02.082544 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:02.082570 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:02.139516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:02.139555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:07:01.635378 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:04.134753 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:06.135163 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:04.654404 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:04.665750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:04.665823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:04.692548 1498704 cri.go:89] found id: ""
	I1217 02:07:04.692573 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.692582 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:04.692589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:04.692649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:04.716945 1498704 cri.go:89] found id: ""
	I1217 02:07:04.716971 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.716980 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:04.716986 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:04.717050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:04.741853 1498704 cri.go:89] found id: ""
	I1217 02:07:04.741919 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.741943 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:04.741956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:04.742029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:04.766368 1498704 cri.go:89] found id: ""
	I1217 02:07:04.766432 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.766456 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:04.766471 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:04.766543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:04.791787 1498704 cri.go:89] found id: ""
	I1217 02:07:04.791811 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.791819 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:04.791826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:04.791886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:04.817229 1498704 cri.go:89] found id: ""
	I1217 02:07:04.817255 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.817264 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:04.817271 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:04.817343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:04.841915 1498704 cri.go:89] found id: ""
	I1217 02:07:04.841938 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.841947 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:04.841953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:04.842013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:04.866862 1498704 cri.go:89] found id: ""
	I1217 02:07:04.866889 1498704 logs.go:282] 0 containers: []
	W1217 02:07:04.866898 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:04.866908 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:04.866920 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:04.930507 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:04.930554 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:04.948025 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:04.948060 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:05.019651 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:05.010407    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.011133    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.012825    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.013342    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:05.015124    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:05.019675 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:05.019688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:05.046001 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:05.046036 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.578495 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:07.591153 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:07.591225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:07.621427 1498704 cri.go:89] found id: ""
	I1217 02:07:07.621450 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.621459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:07.621466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:07.621526 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:07.661892 1498704 cri.go:89] found id: ""
	I1217 02:07:07.661915 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.661923 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:07.661929 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:07.661995 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:07.695665 1498704 cri.go:89] found id: ""
	I1217 02:07:07.695693 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.695703 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:07.695709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:07.695775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:07.721278 1498704 cri.go:89] found id: ""
	I1217 02:07:07.721308 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.721316 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:07.721323 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:07.721381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:07.745368 1498704 cri.go:89] found id: ""
	I1217 02:07:07.745396 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.745404 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:07.745411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:07.745469 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:07.773994 1498704 cri.go:89] found id: ""
	I1217 02:07:07.774017 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.774025 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:07.774032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:07.774094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:07.799025 1498704 cri.go:89] found id: ""
	I1217 02:07:07.799049 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.799058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:07.799070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:07.799128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:07.824235 1498704 cri.go:89] found id: ""
	I1217 02:07:07.824261 1498704 logs.go:282] 0 containers: []
	W1217 02:07:07.824270 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:07.824278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:07.824290 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:07.839101 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:07.839129 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:08.135245 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:10.635146 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:07.923334 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:07.907068    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.913860    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.914502    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916142    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:07.916637    3671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:07.923360 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:07.923372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:07.949715 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:07.949754 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:07.977665 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:07.977690 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.537062 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:10.547797 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:10.547872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:10.572434 1498704 cri.go:89] found id: ""
	I1217 02:07:10.572462 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.572472 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:10.572479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:10.572560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:10.597486 1498704 cri.go:89] found id: ""
	I1217 02:07:10.597510 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.597519 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:10.597525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:10.597591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:10.627205 1498704 cri.go:89] found id: ""
	I1217 02:07:10.627227 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.627236 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:10.627241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:10.627316 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:10.661788 1498704 cri.go:89] found id: ""
	I1217 02:07:10.661815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.661825 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:10.661832 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:10.661892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:10.694378 1498704 cri.go:89] found id: ""
	I1217 02:07:10.694403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.694411 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:10.694417 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:10.694481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:10.719732 1498704 cri.go:89] found id: ""
	I1217 02:07:10.719759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.719768 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:10.719775 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:10.719834 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:10.746071 1498704 cri.go:89] found id: ""
	I1217 02:07:10.746141 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.746169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:10.746181 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:10.746257 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:10.771251 1498704 cri.go:89] found id: ""
	I1217 02:07:10.771324 1498704 logs.go:282] 0 containers: []
	W1217 02:07:10.771339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:10.771349 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:10.771363 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:10.797277 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:10.797316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:10.824227 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:10.824255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:10.883648 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:10.883685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:10.899500 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:10.899545 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:10.971848 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:10.964210    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.964861    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.965875    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.966305    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:10.967767    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:07:13.135257 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:15.635347 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:13.472155 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:13.482654 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:13.482730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:13.511840 1498704 cri.go:89] found id: ""
	I1217 02:07:13.511865 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.511874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:13.511880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:13.511938 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:13.539314 1498704 cri.go:89] found id: ""
	I1217 02:07:13.539340 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.539349 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:13.539355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:13.539418 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:13.564523 1498704 cri.go:89] found id: ""
	I1217 02:07:13.564595 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.564616 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:13.564635 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:13.564722 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:13.588672 1498704 cri.go:89] found id: ""
	I1217 02:07:13.588696 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.588705 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:13.588711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:13.588769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:13.613292 1498704 cri.go:89] found id: ""
	I1217 02:07:13.613370 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.613394 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:13.613413 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:13.613497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:13.640379 1498704 cri.go:89] found id: ""
	I1217 02:07:13.640401 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.640467 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:13.640475 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:13.640596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:13.670823 1498704 cri.go:89] found id: ""
	I1217 02:07:13.670897 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.670909 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:13.670915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:13.671033 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:13.697928 1498704 cri.go:89] found id: ""
	I1217 02:07:13.697954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:13.697963 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:13.697973 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:13.697991 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:13.764081 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:13.754796    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.755478    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757201    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.757841    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:13.759446    3889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:13.764103 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:13.764117 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:13.789698 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:13.789735 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:13.817458 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:13.817528 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:13.873570 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:13.873604 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
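Each diagnostic cycle above runs the same sequence: pgrep for a live kube-apiserver, then one `crictl ps -a --quiet --name=<component>` per control-plane component, then journalctl and dmesg tails. A rough Go sketch of the enumeration step, shelling out the same way the ssh_runner lines do (assumes sudo and crictl are available; illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names queried in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same invocation as the log: list all containers, IDs only,
		// filtered by name. An empty result is the `found id: ""` case.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}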
	I1217 02:07:16.390490 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:16.400824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:16.400892 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:16.433284 1498704 cri.go:89] found id: ""
	I1217 02:07:16.433306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.433315 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:16.433321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:16.433382 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:16.459029 1498704 cri.go:89] found id: ""
	I1217 02:07:16.459051 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.459059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:16.459065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:16.459123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:16.482532 1498704 cri.go:89] found id: ""
	I1217 02:07:16.482559 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.482568 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:16.482574 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:16.482635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:16.508099 1498704 cri.go:89] found id: ""
	I1217 02:07:16.508126 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.508135 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:16.508141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:16.508198 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:16.537293 1498704 cri.go:89] found id: ""
	I1217 02:07:16.537327 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.537336 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:16.537343 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:16.537422 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:16.561736 1498704 cri.go:89] found id: ""
	I1217 02:07:16.561761 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.561769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:16.561776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:16.561841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:16.588020 1498704 cri.go:89] found id: ""
	I1217 02:07:16.588054 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.588063 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:16.588069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:16.588136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:16.614951 1498704 cri.go:89] found id: ""
	I1217 02:07:16.614983 1498704 logs.go:282] 0 containers: []
	W1217 02:07:16.614993 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:16.615018 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:16.615035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:16.674706 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:16.674738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:16.693871 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:16.694008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:16.761779 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:16.753582    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.754184    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.755686    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.756107    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:16.757692    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:16.761800 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:16.761813 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:16.788228 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:16.788270 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:18.135158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:20.135199 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:19.320399 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:19.330773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:19.330845 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:19.354921 1498704 cri.go:89] found id: ""
	I1217 02:07:19.354990 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.355015 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:19.355028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:19.355100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:19.383572 1498704 cri.go:89] found id: ""
	I1217 02:07:19.383648 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.383662 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:19.383670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:19.383735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:19.412179 1498704 cri.go:89] found id: ""
	I1217 02:07:19.412204 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.412213 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:19.412229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:19.412290 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:19.437924 1498704 cri.go:89] found id: ""
	I1217 02:07:19.437950 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.437959 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:19.437966 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:19.438057 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:19.462416 1498704 cri.go:89] found id: ""
	I1217 02:07:19.462483 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.462507 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:19.462528 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:19.462618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:19.486955 1498704 cri.go:89] found id: ""
	I1217 02:07:19.487022 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.487047 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:19.487061 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:19.487133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:19.517143 1498704 cri.go:89] found id: ""
	I1217 02:07:19.517170 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.517178 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:19.517185 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:19.517245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:19.541419 1498704 cri.go:89] found id: ""
	I1217 02:07:19.541443 1498704 logs.go:282] 0 containers: []
	W1217 02:07:19.541452 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:19.541462 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:19.541474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:19.600586 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:19.600621 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:19.615645 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:19.615673 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:19.700496 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:19.692408    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.693050    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694298    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.694651    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:19.696104    4114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:19.700518 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:19.700531 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:19.725860 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:19.725896 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:22.254753 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:22.266831 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:22.266902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:22.291227 1498704 cri.go:89] found id: ""
	I1217 02:07:22.291306 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.291329 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:22.291344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:22.291421 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:22.317812 1498704 cri.go:89] found id: ""
	I1217 02:07:22.317835 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.317844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:22.317850 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:22.317929 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:22.341950 1498704 cri.go:89] found id: ""
	I1217 02:07:22.341973 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.341982 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:22.341991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:22.342074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:22.368217 1498704 cri.go:89] found id: ""
	I1217 02:07:22.368291 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.368330 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:22.368350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:22.368435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:22.396888 1498704 cri.go:89] found id: ""
	I1217 02:07:22.396911 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.396920 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:22.396926 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:22.396987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:22.420964 1498704 cri.go:89] found id: ""
	I1217 02:07:22.421040 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.421064 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:22.421083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:22.421163 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:22.446890 1498704 cri.go:89] found id: ""
	I1217 02:07:22.446954 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.446980 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:22.447002 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:22.447067 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:22.475922 1498704 cri.go:89] found id: ""
	I1217 02:07:22.475949 1498704 logs.go:282] 0 containers: []
	W1217 02:07:22.475959 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:22.475968 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:22.475980 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:22.532457 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:22.532490 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:22.546823 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:22.546900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:22.612059 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:22.604218    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.604911    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606424    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.606737    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:22.608203    4225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:22.612089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:22.612102 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:22.642268 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:22.642325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
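Every "failed describe nodes" entry above has the same shape: kubectl exits with status 1 and the collector keeps the captured stderr for the report instead of aborting the run. A sketch of how a nonzero exit plus captured stderr can be folded into one log entry (hypothetical wrapper, not minikube's logs.go):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// describeNodes runs kubectl against an explicit kubeconfig, the way the
// log's /bin/bash -c invocation does, and reports failure without aborting.
func describeNodes(kubectl, kubeconfig string) {
	var stderr bytes.Buffer
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig="+kubeconfig)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// err is an *exec.ExitError here; its string form is what the
		// log records as "Process exited with status 1".
		fmt.Printf("failed describe nodes: %v\nstderr:\n%s", err, stderr.String())
		return
	}
	fmt.Println("describe nodes succeeded")
}

func main() {
	describeNodes("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig")
}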
	W1217 02:07:22.635112 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:25.134718 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:25.182933 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:25.194033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:25.194115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:25.218403 1498704 cri.go:89] found id: ""
	I1217 02:07:25.218426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.218434 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:25.218441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:25.218500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:25.247233 1498704 cri.go:89] found id: ""
	I1217 02:07:25.247257 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.247267 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:25.247272 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:25.247337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:25.271255 1498704 cri.go:89] found id: ""
	I1217 02:07:25.271278 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.271286 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:25.271292 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:25.271354 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:25.295129 1498704 cri.go:89] found id: ""
	I1217 02:07:25.295152 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.295161 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:25.295167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:25.295232 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:25.323735 1498704 cri.go:89] found id: ""
	I1217 02:07:25.323802 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.323818 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:25.323826 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:25.323895 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:25.348083 1498704 cri.go:89] found id: ""
	I1217 02:07:25.348107 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.348116 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:25.348123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:25.348187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:25.375945 1498704 cri.go:89] found id: ""
	I1217 02:07:25.375967 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.375976 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:25.375982 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:25.376046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:25.404167 1498704 cri.go:89] found id: ""
	I1217 02:07:25.404190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:25.404199 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:25.404207 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:25.404219 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:25.432830 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:25.432905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:25.491437 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:25.491472 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:25.506773 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:25.506811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:25.571857 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:25.563411    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.564290    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566145    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.566486    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:25.567944    4351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:25.571879 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:25.571891 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 02:07:27.634506 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:29.635139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:28.097148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:28.109420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:28.109492 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:28.147274 1498704 cri.go:89] found id: ""
	I1217 02:07:28.147301 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.147310 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:28.147317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:28.147375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:28.182487 1498704 cri.go:89] found id: ""
	I1217 02:07:28.182520 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.182529 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:28.182535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:28.182605 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:28.210414 1498704 cri.go:89] found id: ""
	I1217 02:07:28.210492 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.210506 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:28.210513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:28.210596 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:28.236032 1498704 cri.go:89] found id: ""
	I1217 02:07:28.236067 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.236076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:28.236100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:28.236187 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:28.261848 1498704 cri.go:89] found id: ""
	I1217 02:07:28.261925 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.261949 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:28.261961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:28.262023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:28.287575 1498704 cri.go:89] found id: ""
	I1217 02:07:28.287642 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.287667 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:28.287681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:28.287753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:28.311909 1498704 cri.go:89] found id: ""
	I1217 02:07:28.311942 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.311950 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:28.311974 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:28.312055 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:28.338978 1498704 cri.go:89] found id: ""
	I1217 02:07:28.338999 1498704 logs.go:282] 0 containers: []
	W1217 02:07:28.339013 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:28.339041 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:28.339059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:28.395245 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:28.395283 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:28.410155 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:28.410183 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:28.473762 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:28.465176    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.465695    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467313    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.467841    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:28.469624    4452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:28.473783 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:28.473807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:28.499695 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:28.499728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.034443 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:31.045062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:31.045138 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:31.071798 1498704 cri.go:89] found id: ""
	I1217 02:07:31.071825 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.071835 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:31.071842 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:31.071912 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:31.102760 1498704 cri.go:89] found id: ""
	I1217 02:07:31.102787 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.102795 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:31.102802 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:31.102866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:31.141278 1498704 cri.go:89] found id: ""
	I1217 02:07:31.141303 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.141313 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:31.141320 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:31.141385 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:31.171560 1498704 cri.go:89] found id: ""
	I1217 02:07:31.171590 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.171599 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:31.171606 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:31.171671 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:31.198647 1498704 cri.go:89] found id: ""
	I1217 02:07:31.198713 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.198736 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:31.198749 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:31.198822 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:31.223451 1498704 cri.go:89] found id: ""
	I1217 02:07:31.223534 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.223560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:31.223580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:31.223660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:31.253387 1498704 cri.go:89] found id: ""
	I1217 02:07:31.253413 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.253422 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:31.253428 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:31.253487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:31.278792 1498704 cri.go:89] found id: ""
	I1217 02:07:31.278815 1498704 logs.go:282] 0 containers: []
	W1217 02:07:31.278823 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:31.278832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:31.278843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:31.303758 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:31.303790 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:31.332180 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:31.332251 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:31.388186 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:31.388222 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:31.402632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:31.402661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:31.464007 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:31.455376    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456162    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.456959    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458412    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:31.458952    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1217 02:07:32.134594 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:34.135393 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:33.964236 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:33.974724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:33.974801 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:33.997812 1498704 cri.go:89] found id: ""
	I1217 02:07:33.997833 1498704 logs.go:282] 0 containers: []
	W1217 02:07:33.997841 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:33.997847 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:33.997918 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:34.028229 1498704 cri.go:89] found id: ""
	I1217 02:07:34.028256 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.028265 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:34.028273 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:34.028333 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:34.053400 1498704 cri.go:89] found id: ""
	I1217 02:07:34.053426 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.053437 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:34.053444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:34.053504 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:34.079351 1498704 cri.go:89] found id: ""
	I1217 02:07:34.079419 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.079433 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:34.079441 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:34.079499 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:34.106192 1498704 cri.go:89] found id: ""
	I1217 02:07:34.106228 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.106237 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:34.106244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:34.106315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:34.147697 1498704 cri.go:89] found id: ""
	I1217 02:07:34.147759 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.147785 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:34.147810 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:34.147890 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:34.176177 1498704 cri.go:89] found id: ""
	I1217 02:07:34.176244 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.176268 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:34.176288 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:34.176365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:34.205945 1498704 cri.go:89] found id: ""
	I1217 02:07:34.206007 1498704 logs.go:282] 0 containers: []
	W1217 02:07:34.206035 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:34.206056 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:34.206081 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:34.262276 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:34.262309 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:34.276944 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:34.276971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:34.338908 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:34.331218    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.331638    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333081    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.333377    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:34.334783    4680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
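Every "describe nodes" attempt in this run fails identically: kubectl, invoked with the node-local kubeconfig, dials localhost:8443 and is refused because no kube-apiserver container exists (see the empty probes above). A quick manual check of the same condition, assuming shell access inside the node; the binary and kubeconfig paths are the ones shown in the log:

    # anything listening on the apiserver port?
    sudo ss -ltn | grep -w 8443 || echo "no listener on 8443"
    # the call the log keeps retrying, minus the describe verbosity
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes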
	I1217 02:07:34.338934 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:34.338947 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:34.363617 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:34.363647 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
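With all probes empty, the fallback is raw log collection: the last 400 journal lines for kubelet and containerd, filtered dmesg, and a container listing with a docker fallback. These are the exact commands from the Run: lines above and can be replayed as-is inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a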
	I1217 02:07:36.891296 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:36.902860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:36.902927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:36.930707 1498704 cri.go:89] found id: ""
	I1217 02:07:36.930733 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.930747 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:36.930754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:36.930811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:36.955573 1498704 cri.go:89] found id: ""
	I1217 02:07:36.955597 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.955605 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:36.955611 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:36.955668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:36.980409 1498704 cri.go:89] found id: ""
	I1217 02:07:36.980434 1498704 logs.go:282] 0 containers: []
	W1217 02:07:36.980444 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:36.980450 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:36.980508 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:37.009442 1498704 cri.go:89] found id: ""
	I1217 02:07:37.009467 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.009477 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:37.009484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:37.009551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:37.037149 1498704 cri.go:89] found id: ""
	I1217 02:07:37.037171 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.037180 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:37.037186 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:37.037250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:37.061767 1498704 cri.go:89] found id: ""
	I1217 02:07:37.061792 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.061801 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:37.061818 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:37.061889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:37.085968 1498704 cri.go:89] found id: ""
	I1217 02:07:37.085993 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.086003 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:37.086009 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:37.086074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:37.115273 1498704 cri.go:89] found id: ""
	I1217 02:07:37.115295 1498704 logs.go:282] 0 containers: []
	W1217 02:07:37.115303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:37.115312 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:37.115323 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:37.173190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:37.173223 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:37.190802 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:37.190834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:37.258464 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:37.250353    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.250978    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.252515    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.253019    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:37.254562    4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:37.258486 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:37.258498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:37.283631 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:37.283665 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:36.635067 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:38.635141 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:40.635215 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
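The W-lines from PID 1494358 are interleaved from the parallel no-preload-178365 test: its readiness watcher polls the node object on 192.168.76.2:8443 every two seconds and keeps retrying while the connection is refused. A hedged one-liner to reproduce the same failure from the host (endpoint and node name taken verbatim from the log; -k skips TLS verification):

    curl -sk --max-time 5 \
      https://192.168.76.2:8443/api/v1/nodes/no-preload-178365 \
      || echo "connect failed: nothing listening on 192.168.76.2:8443"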
	I1217 02:07:39.816914 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:39.827386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:39.827463 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:39.852104 1498704 cri.go:89] found id: ""
	I1217 02:07:39.852129 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.852139 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:39.852145 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:39.852204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:39.892785 1498704 cri.go:89] found id: ""
	I1217 02:07:39.892806 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.892815 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:39.892822 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:39.892887 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:39.923500 1498704 cri.go:89] found id: ""
	I1217 02:07:39.923530 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.923538 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:39.923544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:39.923603 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:39.949968 1498704 cri.go:89] found id: ""
	I1217 02:07:39.949995 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.950004 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:39.950010 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:39.950071 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:39.974479 1498704 cri.go:89] found id: ""
	I1217 02:07:39.974500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:39.974508 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:39.974515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:39.974572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:40.015259 1498704 cri.go:89] found id: ""
	I1217 02:07:40.015286 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.015296 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:40.015303 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:40.015375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:40.045029 1498704 cri.go:89] found id: ""
	I1217 02:07:40.045055 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.045064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:40.045071 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:40.045135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:40.072784 1498704 cri.go:89] found id: ""
	I1217 02:07:40.072818 1498704 logs.go:282] 0 containers: []
	W1217 02:07:40.072833 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:40.072843 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:40.072860 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:40.153737 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:40.142795    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.144161    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.145378    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.146432    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:40.147502    4897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:40.153765 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:40.153780 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:40.189498 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:40.189552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:40.222768 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:40.222844 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:40.279190 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:40.279224 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:42.796231 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:42.806670 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:42.806738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:42.830230 1498704 cri.go:89] found id: ""
	I1217 02:07:42.830250 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.830258 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:42.830265 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:42.830323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1217 02:07:43.135159 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:45.135226 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:42.855478 1498704 cri.go:89] found id: ""
	I1217 02:07:42.855500 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.855509 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:42.855515 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:42.855580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:42.894494 1498704 cri.go:89] found id: ""
	I1217 02:07:42.894522 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.894530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:42.894536 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:42.894593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:42.921324 1498704 cri.go:89] found id: ""
	I1217 02:07:42.921350 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.921359 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:42.921365 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:42.921435 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:42.953266 1498704 cri.go:89] found id: ""
	I1217 02:07:42.953290 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.953299 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:42.953305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:42.953366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:42.977816 1498704 cri.go:89] found id: ""
	I1217 02:07:42.977841 1498704 logs.go:282] 0 containers: []
	W1217 02:07:42.977850 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:42.977856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:42.977917 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:43.003747 1498704 cri.go:89] found id: ""
	I1217 02:07:43.003839 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.003865 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:43.003880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:43.003963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:43.029772 1498704 cri.go:89] found id: ""
	I1217 02:07:43.029797 1498704 logs.go:282] 0 containers: []
	W1217 02:07:43.029806 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:43.029816 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:43.029828 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:43.055443 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:43.055476 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:43.084076 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:43.084104 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:43.145546 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:43.145607 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:43.161920 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:43.161999 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:43.231831 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:43.222961    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.223493    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225230    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.225634    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:43.227364    5030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:45.733506 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:45.744340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:45.744408 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:45.769934 1498704 cri.go:89] found id: ""
	I1217 02:07:45.769957 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.769965 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:45.769971 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:45.770034 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:45.795238 1498704 cri.go:89] found id: ""
	I1217 02:07:45.795263 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.795272 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:45.795279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:45.795343 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:45.821898 1498704 cri.go:89] found id: ""
	I1217 02:07:45.821922 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.821930 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:45.821937 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:45.821999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:45.847109 1498704 cri.go:89] found id: ""
	I1217 02:07:45.847132 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.847140 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:45.847146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:45.847208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:45.880160 1498704 cri.go:89] found id: ""
	I1217 02:07:45.880190 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.880199 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:45.880205 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:45.880271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:45.910818 1498704 cri.go:89] found id: ""
	I1217 02:07:45.910850 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.910859 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:45.910866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:45.910927 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:45.939378 1498704 cri.go:89] found id: ""
	I1217 02:07:45.939403 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.939413 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:45.939419 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:45.939480 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:45.966395 1498704 cri.go:89] found id: ""
	I1217 02:07:45.966421 1498704 logs.go:282] 0 containers: []
	W1217 02:07:45.966430 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:45.966440 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:45.966479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:45.981177 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:45.981203 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:46.055154 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:46.045816    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.046563    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.048453    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.049038    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:46.050565    5130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:46.055186 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:46.055204 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:46.081781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:46.081822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:46.110247 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:46.110271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:07:47.635175 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:50.134634 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:48.673749 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:48.684117 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:48.684190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:48.710141 1498704 cri.go:89] found id: ""
	I1217 02:07:48.710163 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.710171 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:48.710177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:48.710242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:48.735609 1498704 cri.go:89] found id: ""
	I1217 02:07:48.735631 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.735639 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:48.735648 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:48.735707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:48.760494 1498704 cri.go:89] found id: ""
	I1217 02:07:48.760517 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.760525 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:48.760532 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:48.760592 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:48.786553 1498704 cri.go:89] found id: ""
	I1217 02:07:48.786574 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.786582 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:48.786588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:48.786645 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:48.815529 1498704 cri.go:89] found id: ""
	I1217 02:07:48.815551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.815560 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:48.815566 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:48.815623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:48.839528 1498704 cri.go:89] found id: ""
	I1217 02:07:48.839551 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.839560 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:48.839567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:48.839649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:48.870240 1498704 cri.go:89] found id: ""
	I1217 02:07:48.870266 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.870275 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:48.870282 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:48.870363 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:48.906712 1498704 cri.go:89] found id: ""
	I1217 02:07:48.906736 1498704 logs.go:282] 0 containers: []
	W1217 02:07:48.906746 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:48.906756 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:48.906786 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:48.934786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:48.934865 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:48.964758 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:48.964785 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:49.022291 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:49.022326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:49.036990 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:49.037025 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:49.101921 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:49.093270    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.093786    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095214    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.095625    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:49.097015    5256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.602715 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.614088 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:51.614167 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:51.640614 1498704 cri.go:89] found id: ""
	I1217 02:07:51.640639 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.640648 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:51.640655 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:51.640716 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:51.665595 1498704 cri.go:89] found id: ""
	I1217 02:07:51.665622 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.665631 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:51.665637 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:51.665727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:51.690508 1498704 cri.go:89] found id: ""
	I1217 02:07:51.690532 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.690541 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:51.690547 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:51.690627 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:51.717537 1498704 cri.go:89] found id: ""
	I1217 02:07:51.717561 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.717570 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:51.717577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:51.717638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:51.742073 1498704 cri.go:89] found id: ""
	I1217 02:07:51.742095 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.742104 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:51.742110 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:51.742169 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:51.768165 1498704 cri.go:89] found id: ""
	I1217 02:07:51.768188 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.768234 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:51.768255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:51.768322 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:51.793095 1498704 cri.go:89] found id: ""
	I1217 02:07:51.793118 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.793127 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:51.793133 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:51.793195 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:51.817679 1498704 cri.go:89] found id: ""
	I1217 02:07:51.817701 1498704 logs.go:282] 0 containers: []
	W1217 02:07:51.817710 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:51.817720 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:51.817730 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:51.874453 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:51.874486 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:51.890393 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:51.890418 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:51.966182 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:51.958188    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.958611    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960237    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.960817    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:51.962352    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:51.966201 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:51.966214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:51.992382 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:51.992417 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:07:52.135139 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:54.135194 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:07:54.525060 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:54.535685 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:54.535760 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:54.563912 1498704 cri.go:89] found id: ""
	I1217 02:07:54.563935 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.563944 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:54.563950 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:54.564011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:54.588995 1498704 cri.go:89] found id: ""
	I1217 02:07:54.589020 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.589031 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:54.589038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:54.589101 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:54.615173 1498704 cri.go:89] found id: ""
	I1217 02:07:54.615198 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.615207 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:54.615214 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:54.615277 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:54.640498 1498704 cri.go:89] found id: ""
	I1217 02:07:54.640523 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.640532 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:54.640539 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:54.640623 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:54.666201 1498704 cri.go:89] found id: ""
	I1217 02:07:54.666226 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.666234 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:54.666241 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:54.666303 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:54.690876 1498704 cri.go:89] found id: ""
	I1217 02:07:54.690899 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.690908 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:54.690915 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:54.690974 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:54.714932 1498704 cri.go:89] found id: ""
	I1217 02:07:54.715000 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.715024 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:54.715043 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:54.715133 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:54.739880 1498704 cri.go:89] found id: ""
	I1217 02:07:54.739906 1498704 logs.go:282] 0 containers: []
	W1217 02:07:54.739926 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:54.739952 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:54.739978 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:54.804035 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:54.795583    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.796360    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798131    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.798692    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:54.800197    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:07:54.804056 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:54.804070 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:54.829994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:54.830030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:07:54.858611 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:54.858639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:54.921120 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:54.921196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.438546 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.448669 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:07:57.448736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:07:57.475324 1498704 cri.go:89] found id: ""
	I1217 02:07:57.475346 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.475355 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:07:57.475362 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:07:57.475419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:07:57.505098 1498704 cri.go:89] found id: ""
	I1217 02:07:57.505123 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.505131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:07:57.505137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:07:57.505196 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:07:57.529496 1498704 cri.go:89] found id: ""
	I1217 02:07:57.529519 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.529529 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:07:57.529535 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:07:57.529601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:07:57.560154 1498704 cri.go:89] found id: ""
	I1217 02:07:57.560179 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.560188 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:07:57.560194 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:07:57.560256 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:07:57.584872 1498704 cri.go:89] found id: ""
	I1217 02:07:57.584898 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.584912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:07:57.584919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:07:57.584976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:07:57.611897 1498704 cri.go:89] found id: ""
	I1217 02:07:57.611930 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.611938 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:07:57.611945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:07:57.612004 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:07:57.636969 1498704 cri.go:89] found id: ""
	I1217 02:07:57.636991 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.636999 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:07:57.637006 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:07:57.637069 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:07:57.661285 1498704 cri.go:89] found id: ""
	I1217 02:07:57.661312 1498704 logs.go:282] 0 containers: []
	W1217 02:07:57.661320 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:07:57.661329 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:07:57.661340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:07:57.717030 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:07:57.717066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:07:57.732556 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:07:57.732588 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:07:57.802383 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:07:57.794573    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.795225    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.796918    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.797389    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:57.798492    5575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:07:57.802403 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:07:57.802414 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:07:57.831640 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:07:57.831729 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
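The cycle above is minikube's control-plane probe: for each expected component it asks the CRI runtime for a matching container and finds none, which yields the repeated "0 containers" / "No container was found matching" pairs. A minimal shell sketch of the same sequence, built only from the crictl invocation these log lines already record (running it interactively on the node is an illustration-only assumption; the harness drives it over ssh_runner):

	# probe each control-plane component the way the log shows; empty output = no container
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  printf '%s: %s\n' "$name" "${ids:-<none>}"
	done

When every name comes back empty, minikube falls through to the "Gathering logs for ..." pass (kubelet, dmesg, describe nodes, containerd, container status) seen above.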
	W1217 02:07:56.634914 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:07:58.635189 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:01.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:00.359786 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.375104 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:00.375194 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:00.418191 1498704 cri.go:89] found id: ""
	I1217 02:08:00.418222 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.418232 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:00.418239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:00.418315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:00.456739 1498704 cri.go:89] found id: ""
	I1217 02:08:00.456766 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.456775 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:00.456782 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:00.456850 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:00.488069 1498704 cri.go:89] found id: ""
	I1217 02:08:00.488097 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.488106 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:00.488115 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:00.488180 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:00.522338 1498704 cri.go:89] found id: ""
	I1217 02:08:00.522369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.522383 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:00.522391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:00.522477 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:00.552999 1498704 cri.go:89] found id: ""
	I1217 02:08:00.553026 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.553035 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:00.553041 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:00.553105 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:00.579678 1498704 cri.go:89] found id: ""
	I1217 02:08:00.579710 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.579719 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:00.579725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:00.579787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:00.605680 1498704 cri.go:89] found id: ""
	I1217 02:08:00.605708 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.605717 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:00.605724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:00.605787 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:00.632147 1498704 cri.go:89] found id: ""
	I1217 02:08:00.632172 1498704 logs.go:282] 0 containers: []
	W1217 02:08:00.632181 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:00.632191 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:00.632202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:00.658405 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:00.658442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:00.687017 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:00.687042 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:00.743960 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:00.743997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:00.758928 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:00.758957 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:00.826075 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:00.817208    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.817979    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.819744    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.820361    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:00.822094    5704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:08:03.634990 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:05.635168 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:03.326352 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.337106 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:03.337176 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:03.362079 1498704 cri.go:89] found id: ""
	I1217 02:08:03.362103 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.362112 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:03.362120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:03.362185 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:03.406055 1498704 cri.go:89] found id: ""
	I1217 02:08:03.406078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.406086 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:03.406092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:03.406153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:03.469689 1498704 cri.go:89] found id: ""
	I1217 02:08:03.469719 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.469728 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:03.469734 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:03.469795 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:03.495363 1498704 cri.go:89] found id: ""
	I1217 02:08:03.495388 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.495397 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:03.495403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:03.495462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:03.520987 1498704 cri.go:89] found id: ""
	I1217 02:08:03.521020 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.521029 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:03.521035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:03.521104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:03.546993 1498704 cri.go:89] found id: ""
	I1217 02:08:03.547070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.547086 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:03.547094 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:03.547157 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:03.572356 1498704 cri.go:89] found id: ""
	I1217 02:08:03.572381 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.572390 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:03.572396 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:03.572465 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:03.601007 1498704 cri.go:89] found id: ""
	I1217 02:08:03.601039 1498704 logs.go:282] 0 containers: []
	W1217 02:08:03.601048 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:03.601058 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:03.601069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:03.626163 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:03.626198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:03.653854 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:03.653882 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:03.711530 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:03.711566 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:03.726308 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:03.726377 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:03.794467 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:03.786046    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.786845    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788402    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.788685    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:03.790142    5817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
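Each "failed describe nodes" block above is the same symptom: kubectl dials localhost:8443, nothing is listening, and every API group probe ends in "connection refused". A quick manual check from inside the node, assuming curl is available there (an assumption; the log itself only shows the kubectl command):

	# the exact command the log runs; exits 1 while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# probe the port kubectl is dialing; "connection refused" matches the stderr above
	curl -k "https://localhost:8443/api?timeout=32s"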
	I1217 02:08:06.296166 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.306860 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:06.306931 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:06.335081 1498704 cri.go:89] found id: ""
	I1217 02:08:06.335118 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.335128 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:06.335140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:06.335216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:06.360315 1498704 cri.go:89] found id: ""
	I1217 02:08:06.360337 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.360346 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:06.360353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:06.360416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:06.438162 1498704 cri.go:89] found id: ""
	I1217 02:08:06.438184 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.438193 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:06.438201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:06.438260 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:06.473712 1498704 cri.go:89] found id: ""
	I1217 02:08:06.473739 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.473750 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:06.473757 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:06.473821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:06.501185 1498704 cri.go:89] found id: ""
	I1217 02:08:06.501213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.501223 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:06.501229 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:06.501291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:06.527618 1498704 cri.go:89] found id: ""
	I1217 02:08:06.527642 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.527650 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:06.527657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:06.527723 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:06.551855 1498704 cri.go:89] found id: ""
	I1217 02:08:06.551882 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.551892 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:06.551899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:06.551982 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:06.577516 1498704 cri.go:89] found id: ""
	I1217 02:08:06.577547 1498704 logs.go:282] 0 containers: []
	W1217 02:08:06.577556 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:06.577566 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:06.577577 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:06.592728 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:06.592762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:06.660537 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:06.652500    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.653062    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.654586    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.655108    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:06.656605    5914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:06.660559 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:06.660572 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:06.685272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:06.685307 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:06.716733 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:06.716761 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:07.635213 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:10.134640 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
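The interleaved "node_ready" warnings belong to the parallel no-preload-178365 run (process 1494358), which polls the node object over the cluster address and hits the same refused port, retrying every 2-2.5 seconds per the timestamps. A hedged curl equivalent of that poll, with the URL taken verbatim from the warning (curl, -k, and the manual retry are illustration-only assumptions; the test itself uses the Go client and its own backoff):

	# fails with "connection refused" until the apiserver answers on 192.168.76.2:8443
	curl -ks https://192.168.76.2:8443/api/v1/nodes/no-preload-178365 || echo "connection refused"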
	I1217 02:08:09.274376 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.285055 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:09.285129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:09.310445 1498704 cri.go:89] found id: ""
	I1217 02:08:09.310468 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.310477 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:09.310483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:09.310551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:09.339399 1498704 cri.go:89] found id: ""
	I1217 02:08:09.339434 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.339443 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:09.339449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:09.339539 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:09.364792 1498704 cri.go:89] found id: ""
	I1217 02:08:09.364830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.364843 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:09.364851 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:09.364921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:09.398786 1498704 cri.go:89] found id: ""
	I1217 02:08:09.398813 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.398822 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:09.398829 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:09.398898 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:09.437605 1498704 cri.go:89] found id: ""
	I1217 02:08:09.437633 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.437670 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:09.437696 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:09.437778 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:09.469389 1498704 cri.go:89] found id: ""
	I1217 02:08:09.469430 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.469439 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:09.469446 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:09.469557 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:09.501822 1498704 cri.go:89] found id: ""
	I1217 02:08:09.501847 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.501856 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:09.501873 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:09.501953 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:09.526536 1498704 cri.go:89] found id: ""
	I1217 02:08:09.526604 1498704 logs.go:282] 0 containers: []
	W1217 02:08:09.526627 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:09.526649 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:09.526685 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:09.553800 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:09.553829 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:09.611333 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:09.611367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:09.626057 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:09.626083 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:09.690274 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:09.682123    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.682719    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684419    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.684916    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:09.686406    6041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:09.690296 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:09.690308 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.216656 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.226983 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:12.227094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:12.251590 1498704 cri.go:89] found id: ""
	I1217 02:08:12.251613 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.251622 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:12.251628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:12.251686 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:12.276257 1498704 cri.go:89] found id: ""
	I1217 02:08:12.276285 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.276293 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:12.276308 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:12.276365 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:12.300603 1498704 cri.go:89] found id: ""
	I1217 02:08:12.300628 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.300637 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:12.300643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:12.300704 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:12.328528 1498704 cri.go:89] found id: ""
	I1217 02:08:12.328552 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.328561 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:12.328571 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:12.328629 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:12.353931 1498704 cri.go:89] found id: ""
	I1217 02:08:12.353954 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.353963 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:12.353969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:12.354031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:12.426173 1498704 cri.go:89] found id: ""
	I1217 02:08:12.426238 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.426263 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:12.426283 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:12.426375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:12.463406 1498704 cri.go:89] found id: ""
	I1217 02:08:12.463432 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.463441 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:12.463447 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:12.463511 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:12.491432 1498704 cri.go:89] found id: ""
	I1217 02:08:12.491457 1498704 logs.go:282] 0 containers: []
	W1217 02:08:12.491466 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:12.491476 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:12.491487 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:12.549942 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:12.549979 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:12.566124 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:12.566160 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:12.632809 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:12.624956    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.625367    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.626971    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.627323    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:12.628997    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:12.632878 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:12.632899 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:12.657969 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:12.658007 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:12.635367 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:14.635409 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:15.189789 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.200614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:15.200684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:15.224844 1498704 cri.go:89] found id: ""
	I1217 02:08:15.224865 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.224874 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:15.224880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:15.224939 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:15.253351 1498704 cri.go:89] found id: ""
	I1217 02:08:15.253417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.253441 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:15.253459 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:15.253547 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:15.278140 1498704 cri.go:89] found id: ""
	I1217 02:08:15.278216 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.278238 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:15.278257 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:15.278335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:15.303296 1498704 cri.go:89] found id: ""
	I1217 02:08:15.303325 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.303334 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:15.303340 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:15.303399 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:15.332342 1498704 cri.go:89] found id: ""
	I1217 02:08:15.332369 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.332379 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:15.332386 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:15.332442 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:15.361393 1498704 cri.go:89] found id: ""
	I1217 02:08:15.361417 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.361426 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:15.361432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:15.361501 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:15.399309 1498704 cri.go:89] found id: ""
	I1217 02:08:15.399335 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.399343 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:15.399350 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:15.399409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:15.441743 1498704 cri.go:89] found id: ""
	I1217 02:08:15.441769 1498704 logs.go:282] 0 containers: []
	W1217 02:08:15.441778 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:15.441787 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:15.441799 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:15.508941 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:15.508977 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:15.524099 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:15.524127 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:15.595333 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:15.587382    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.588292    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.589845    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.590137    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:15.591669    6256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:15.595351 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:15.595367 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:15.620921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:15.620958 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:17.135481 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:19.635228 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:18.151199 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.162135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:18.162207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:18.190085 1498704 cri.go:89] found id: ""
	I1217 02:08:18.190108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.190116 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:18.190123 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:18.190186 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:18.218906 1498704 cri.go:89] found id: ""
	I1217 02:08:18.218930 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.218938 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:18.218944 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:18.219002 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:18.242454 1498704 cri.go:89] found id: ""
	I1217 02:08:18.242476 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.242484 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:18.242490 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:18.242549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:18.267483 1498704 cri.go:89] found id: ""
	I1217 02:08:18.267505 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.267514 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:18.267527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:18.267587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:18.291870 1498704 cri.go:89] found id: ""
	I1217 02:08:18.291894 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.291902 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:18.291909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:18.291970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:18.315514 1498704 cri.go:89] found id: ""
	I1217 02:08:18.315543 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.315551 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:18.315558 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:18.315617 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:18.338958 1498704 cri.go:89] found id: ""
	I1217 02:08:18.338980 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.338988 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:18.338995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:18.339052 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:18.362300 1498704 cri.go:89] found id: ""
	I1217 02:08:18.362326 1498704 logs.go:282] 0 containers: []
	W1217 02:08:18.362339 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
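Each diagnostic pass opens the same way: a pgrep for a host-level apiserver process, then one crictl query per expected control-plane component; an empty ID list for all eight names means no such container exists in any state, not merely an unhealthy one. The sweep condensed into a sketch (component names copied from the log):

    # Enumerate the expected control-plane containers; a blank value after
    # a name means crictl found no container, running or exited, for it.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "$name: $(sudo crictl ps -a --quiet --name="$name")"
    done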
	I1217 02:08:18.362349 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:18.362361 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:18.441796 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:18.441881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
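With no containers to inspect, log gathering falls back to host-level sources: the kubelet and containerd systemd units plus warning-and-above kernel messages. Reproduced by hand, as a sketch (the 400-line windows match the commands in the log):

    # Host-side evidence when the control plane never comes up: kubelet and
    # containerd unit logs, then recent kernel warnings and errors.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400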
	I1217 02:08:18.465294 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:18.465318 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:18.527976 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:18.519744    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.520606    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522264    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.522601    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:18.524100    6371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:18.527999 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:18.528012 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:18.552941 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:18.552971 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
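The "container status" step uses a small fallback chain: run crictl if it is on PATH, and if the binary is missing (the || echo crictl keeps the command string non-empty so sudo still runs and fails cleanly) fall through to docker. The same idiom with $() substitution instead of backticks, as a sketch:

    # Prefer crictl when present, otherwise fall back to docker; both are
    # best-effort, so a missing binary just drops through to the next tool.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a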
	I1217 02:08:21.080554 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.090872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:21.090951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:21.119427 1498704 cri.go:89] found id: ""
	I1217 02:08:21.119451 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.119459 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:21.119466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:21.119531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:21.145488 1498704 cri.go:89] found id: ""
	I1217 02:08:21.145509 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.145517 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:21.145524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:21.145589 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:21.171795 1498704 cri.go:89] found id: ""
	I1217 02:08:21.171822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.171830 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:21.171837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:21.171897 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:21.200041 1498704 cri.go:89] found id: ""
	I1217 02:08:21.200067 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.200076 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:21.200083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:21.200144 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:21.224266 1498704 cri.go:89] found id: ""
	I1217 02:08:21.224294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.224302 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:21.224310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:21.224374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:21.249832 1498704 cri.go:89] found id: ""
	I1217 02:08:21.249859 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.249868 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:21.249875 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:21.249934 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:21.276533 1498704 cri.go:89] found id: ""
	I1217 02:08:21.276556 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.276565 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:21.276577 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:21.276638 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:21.302869 1498704 cri.go:89] found id: ""
	I1217 02:08:21.302898 1498704 logs.go:282] 0 containers: []
	W1217 02:08:21.302906 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:21.302920 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:21.302932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:21.359571 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:21.359612 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:21.386971 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:21.387000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:21.481485 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:21.472845    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.473772    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475499    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.475850    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:21.477350    6480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:21.481511 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:21.481523 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:21.510229 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:21.510266 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:22.134985 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:24.135180 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:26.135497 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:24.042457 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.053742 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:24.053815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:24.079751 1498704 cri.go:89] found id: ""
	I1217 02:08:24.079777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.079793 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:24.079801 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:24.079863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:24.106268 1498704 cri.go:89] found id: ""
	I1217 02:08:24.106294 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.106304 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:24.106310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:24.106372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:24.136105 1498704 cri.go:89] found id: ""
	I1217 02:08:24.136127 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.136141 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:24.136147 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:24.136208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:24.162676 1498704 cri.go:89] found id: ""
	I1217 02:08:24.162704 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.162713 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:24.162719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:24.162781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:24.186881 1498704 cri.go:89] found id: ""
	I1217 02:08:24.186909 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.186918 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:24.186924 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:24.186983 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:24.211784 1498704 cri.go:89] found id: ""
	I1217 02:08:24.211807 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.211816 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:24.211823 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:24.211883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:24.239768 1498704 cri.go:89] found id: ""
	I1217 02:08:24.239791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.239799 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:24.239806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:24.239863 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:24.267746 1498704 cri.go:89] found id: ""
	I1217 02:08:24.267826 1498704 logs.go:282] 0 containers: []
	W1217 02:08:24.267843 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:24.267853 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:24.267864 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:24.292626 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:24.292661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:24.324726 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:24.324756 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:24.386142 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:24.386184 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:24.417577 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:24.417605 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:24.496974 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:24.487773    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.488629    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490306    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.490864    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:24.492502    6610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:26.997267 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.015470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:27.015561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:27.041572 1498704 cri.go:89] found id: ""
	I1217 02:08:27.041593 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.041601 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:27.041608 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:27.041697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:27.067860 1498704 cri.go:89] found id: ""
	I1217 02:08:27.067884 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.067902 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:27.067923 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:27.068020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:27.091698 1498704 cri.go:89] found id: ""
	I1217 02:08:27.091722 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.091737 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:27.091744 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:27.091804 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:27.116923 1498704 cri.go:89] found id: ""
	I1217 02:08:27.116946 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.116954 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:27.116961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:27.117020 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:27.142595 1498704 cri.go:89] found id: ""
	I1217 02:08:27.142619 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.142628 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:27.142634 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:27.142693 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:27.167169 1498704 cri.go:89] found id: ""
	I1217 02:08:27.167195 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.167204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:27.167211 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:27.167271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:27.191350 1498704 cri.go:89] found id: ""
	I1217 02:08:27.191376 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.191384 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:27.191391 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:27.191451 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:27.216388 1498704 cri.go:89] found id: ""
	I1217 02:08:27.216413 1498704 logs.go:282] 0 containers: []
	W1217 02:08:27.216422 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:27.216431 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:27.216442 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:27.279861 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:27.271870    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.272650    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274216    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.274716    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:27.276170    6701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:27.279884 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:27.279900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:27.304990 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:27.305027 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:27.333926 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:27.333952 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:27.396365 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:27.396403 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 02:08:28.635158 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:30.635316 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:29.913629 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.924284 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:29.924359 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:29.951846 1498704 cri.go:89] found id: ""
	I1217 02:08:29.951873 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.951882 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:29.951888 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:29.951948 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:29.979680 1498704 cri.go:89] found id: ""
	I1217 02:08:29.979709 1498704 logs.go:282] 0 containers: []
	W1217 02:08:29.979718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:29.979724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:29.979783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:30.017361 1498704 cri.go:89] found id: ""
	I1217 02:08:30.017494 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.017508 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:30.017517 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:30.017600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:30.055966 1498704 cri.go:89] found id: ""
	I1217 02:08:30.055994 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.056008 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:30.056015 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:30.056153 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:30.086268 1498704 cri.go:89] found id: ""
	I1217 02:08:30.086296 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.086305 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:30.086313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:30.086387 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:30.114436 1498704 cri.go:89] found id: ""
	I1217 02:08:30.114474 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.114485 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:30.114493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:30.114563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:30.143104 1498704 cri.go:89] found id: ""
	I1217 02:08:30.143130 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.143140 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:30.143148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:30.143215 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:30.178848 1498704 cri.go:89] found id: ""
	I1217 02:08:30.178912 1498704 logs.go:282] 0 containers: []
	W1217 02:08:30.178928 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:30.178939 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:30.178950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:30.235226 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:30.235261 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:30.250400 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:30.250427 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:30.316823 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:30.308240    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.308888    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310382    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.310896    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:30.312541    6819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:30.316843 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:30.316855 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:30.341943 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:30.341985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:33.135099 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:35.135298 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:32.880177 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.891005 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:32.891073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:32.918870 1498704 cri.go:89] found id: ""
	I1217 02:08:32.918896 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.918905 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:32.918912 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:32.918970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:32.944098 1498704 cri.go:89] found id: ""
	I1217 02:08:32.944123 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.944132 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:32.944137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:32.944197 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:32.968767 1498704 cri.go:89] found id: ""
	I1217 02:08:32.968791 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.968801 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:32.968806 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:32.968864 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:32.992596 1498704 cri.go:89] found id: ""
	I1217 02:08:32.992624 1498704 logs.go:282] 0 containers: []
	W1217 02:08:32.992632 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:32.992638 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:32.992702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:33.018400 1498704 cri.go:89] found id: ""
	I1217 02:08:33.018424 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.018433 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:33.018439 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:33.018497 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:33.043622 1498704 cri.go:89] found id: ""
	I1217 02:08:33.043650 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.043660 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:33.043666 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:33.043728 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:33.068595 1498704 cri.go:89] found id: ""
	I1217 02:08:33.068617 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.068627 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:33.068633 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:33.068695 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:33.097084 1498704 cri.go:89] found id: ""
	I1217 02:08:33.097108 1498704 logs.go:282] 0 containers: []
	W1217 02:08:33.097117 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:33.097126 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:33.097137 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:33.122964 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:33.123001 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:33.151132 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:33.151159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:33.206768 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:33.206805 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:33.221251 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:33.221330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:33.289516 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:33.280741    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.281345    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283069    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.283615    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:33.285248    6942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
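From here the full cycle (process check, per-component crictl sweep, log gathering, a failed describe nodes) simply repeats every two to three seconds for the remainder of the wait window, which is what stretches this failure out to hundreds of seconds. A rough sketch of that retry shape (an assumption about the loop, not minikube's actual implementation):

    # Poll for a kube-apiserver container, taking a container-status
    # snapshot on each miss, until it appears or a deadline passes.
    deadline=$((SECONDS + 360))
    until [ -n "$(sudo crictl ps --quiet --name=kube-apiserver)" ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sudo crictl ps -a
      sleep 3
    done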
	I1217 02:08:35.789806 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.800262 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:35.800330 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:35.824823 1498704 cri.go:89] found id: ""
	I1217 02:08:35.824844 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.824852 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:35.824859 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:35.824916 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:35.849352 1498704 cri.go:89] found id: ""
	I1217 02:08:35.849379 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.849388 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:35.849395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:35.849455 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:35.873025 1498704 cri.go:89] found id: ""
	I1217 02:08:35.873045 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.873054 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:35.873060 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:35.873123 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:35.897548 1498704 cri.go:89] found id: ""
	I1217 02:08:35.897572 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.897581 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:35.897586 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:35.897660 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:35.927220 1498704 cri.go:89] found id: ""
	I1217 02:08:35.927283 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.927301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:35.927309 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:35.927374 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:35.955050 1498704 cri.go:89] found id: ""
	I1217 02:08:35.955075 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.955083 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:35.955089 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:35.955168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:35.979074 1498704 cri.go:89] found id: ""
	I1217 02:08:35.979144 1498704 logs.go:282] 0 containers: []
	W1217 02:08:35.979160 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:35.979167 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:35.979228 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:36.005502 1498704 cri.go:89] found id: ""
	I1217 02:08:36.005529 1498704 logs.go:282] 0 containers: []
	W1217 02:08:36.005557 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:36.005568 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:36.005582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:36.022508 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:36.022536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:36.088117 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:36.079050    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.079820    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081330    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.081956    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:36.083620    7042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:36.088139 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:36.088152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:36.112883 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:36.112917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:36.142584 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:36.142610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:37.635249 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:40.135193 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:38.698261 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:38.709807 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:38.709880 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:38.734678 1498704 cri.go:89] found id: ""
	I1217 02:08:38.734703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.734712 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:38.734718 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:38.734777 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:38.764118 1498704 cri.go:89] found id: ""
	I1217 02:08:38.764145 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.764154 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:38.764161 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:38.764223 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:38.792269 1498704 cri.go:89] found id: ""
	I1217 02:08:38.792295 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.792305 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:38.792311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:38.792371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:38.817823 1498704 cri.go:89] found id: ""
	I1217 02:08:38.817845 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.817854 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:38.817861 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:38.817921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:38.846444 1498704 cri.go:89] found id: ""
	I1217 02:08:38.846469 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.846478 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:38.846484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:38.846575 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:38.870805 1498704 cri.go:89] found id: ""
	I1217 02:08:38.870830 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.870839 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:38.870845 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:38.870909 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:38.902022 1498704 cri.go:89] found id: ""
	I1217 02:08:38.902047 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.902056 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:38.902063 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:38.902127 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:38.925802 1498704 cri.go:89] found id: ""
	I1217 02:08:38.925831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:38.925851 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:38.925860 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:38.925871 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:38.991113 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:38.991154 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:39.006019 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:39.006049 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:39.074269 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:39.065736    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.066593    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068157    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.068459    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:39.070010    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:39.074328 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:39.074342 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:39.099793 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:39.099827 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
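
The cycle above (pgrep, then one crictl query per control-plane component, each returning `found id: ""`) is minikube's container probe. A hypothetical Go sketch of that loop, shelling out to the same `crictl ps -a --quiet --name=<name>` command the cri.go lines record; the structure is illustrative, not minikube's actual implementation:

// probe_loop.go: run crictl once per expected component and report
// when no container ID comes back, mirroring the "0 containers" /
// "No container was found matching" pairs in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
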
	I1217 02:08:41.629026 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.643330 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:41.643411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:41.702722 1498704 cri.go:89] found id: ""
	I1217 02:08:41.702743 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.702752 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:41.702758 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:41.702817 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:41.727343 1498704 cri.go:89] found id: ""
	I1217 02:08:41.727368 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.727377 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:41.727383 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:41.727443 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:41.752306 1498704 cri.go:89] found id: ""
	I1217 02:08:41.752331 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.752340 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:41.752346 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:41.752409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:41.777003 1498704 cri.go:89] found id: ""
	I1217 02:08:41.777078 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.777101 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:41.777121 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:41.777225 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:41.801272 1498704 cri.go:89] found id: ""
	I1217 02:08:41.801298 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.801306 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:41.801313 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:41.801371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:41.827046 1498704 cri.go:89] found id: ""
	I1217 02:08:41.827070 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.827078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:41.827085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:41.827142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:41.855924 1498704 cri.go:89] found id: ""
	I1217 02:08:41.855956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.855965 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:41.855972 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:41.856042 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:41.882797 1498704 cri.go:89] found id: ""
	I1217 02:08:41.882821 1498704 logs.go:282] 0 containers: []
	W1217 02:08:41.882830 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:41.882840 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:41.882856 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:41.897281 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:41.897316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:41.963310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:41.955481    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.955893    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957340    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.957676    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:41.959334    7270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:41.963333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:41.963344 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:41.988494 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:41.988529 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:42.019738 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:42.019770 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:08:42.135661 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:44.635135 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
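
The interleaved node_ready.go warnings come from a second process (pid 1494358) polling the no-preload node's Ready condition every ~2.5s against 192.168.76.2:8443. A sketch of that retry loop using client-go, with the kubeconfig path and node name taken from the log; this is an assumption-laden illustration, not minikube's node_ready.go:

// node_ready.go sketch: fetch the node, retry on "connection refused",
// and stop once the Ready condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-178365", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down, this prints the same
			// "connection refused" error the warnings above show.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		fmt.Println("node not Ready yet (will retry)")
		time.Sleep(2500 * time.Millisecond)
	}
}
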
	I1217 02:08:44.578521 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.589302 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:44.589376 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:44.614651 1498704 cri.go:89] found id: ""
	I1217 02:08:44.614676 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.614685 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:44.614692 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:44.614755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:44.666392 1498704 cri.go:89] found id: ""
	I1217 02:08:44.666414 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.666422 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:44.666429 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:44.666487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:44.722566 1498704 cri.go:89] found id: ""
	I1217 02:08:44.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.722599 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:44.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:44.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:44.747631 1498704 cri.go:89] found id: ""
	I1217 02:08:44.747656 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.747665 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:44.747671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:44.747730 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:44.775719 1498704 cri.go:89] found id: ""
	I1217 02:08:44.775756 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.775765 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:44.775773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:44.775846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:44.801032 1498704 cri.go:89] found id: ""
	I1217 02:08:44.801056 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.801066 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:44.801072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:44.801131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:44.827838 1498704 cri.go:89] found id: ""
	I1217 02:08:44.827872 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.827883 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:44.827890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:44.827961 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:44.852948 1498704 cri.go:89] found id: ""
	I1217 02:08:44.852981 1498704 logs.go:282] 0 containers: []
	W1217 02:08:44.852990 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:44.853000 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:44.853011 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:44.908280 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:44.908314 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:44.923445 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:44.923538 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:44.992600 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:44.983987    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.984836    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986288    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.986703    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:44.987942    7388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:44.992624 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:44.992637 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:45.027924 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:45.027975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:47.587759 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.598591 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:47.598664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:47.660378 1498704 cri.go:89] found id: ""
	I1217 02:08:47.660400 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.660408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:47.660414 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:47.660472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:47.708467 1498704 cri.go:89] found id: ""
	I1217 02:08:47.708489 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.708498 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:47.708504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:47.708563 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:47.733161 1498704 cri.go:89] found id: ""
	I1217 02:08:47.733183 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.733191 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:47.733198 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:47.733264 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:47.759190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.759213 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.759222 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:47.759228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:47.759285 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:47.787579 1498704 cri.go:89] found id: ""
	I1217 02:08:47.787601 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.787610 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:47.787616 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:47.787697 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:47.816190 1498704 cri.go:89] found id: ""
	I1217 02:08:47.816215 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.816224 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:47.816231 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:47.816323 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:47.843534 1498704 cri.go:89] found id: ""
	I1217 02:08:47.843562 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.843572 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:47.843578 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:47.843643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1217 02:08:47.135060 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:49.634635 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:47.867806 1498704 cri.go:89] found id: ""
	I1217 02:08:47.867831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:47.867841 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:47.867852 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:47.867870 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:47.926619 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:47.926658 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:47.941706 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:47.941734 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:48.009461 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:47.999838    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.000525    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002461    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.002852    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:48.004815    7498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:48.009539 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:48.009561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:48.035273 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:48.035311 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
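
The "container status" command above carries its own fallback: prefer crictl when present, otherwise `sudo docker ps -a`. A simplified Go rendering of that one-liner (it skips the shell's `echo crictl` intermediate step; purely illustrative, not minikube code):

// container_status.go: list containers via crictl if it is on PATH,
// falling back to a Docker listing, as in the logged
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// Last resort, matching the `|| sudo docker ps -a` branch.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(string(out))
}
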
	I1217 02:08:50.567421 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.578623 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:50.578694 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:50.607374 1498704 cri.go:89] found id: ""
	I1217 02:08:50.607396 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.607405 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.607411 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:50.607472 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:50.666455 1498704 cri.go:89] found id: ""
	I1217 02:08:50.666484 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.666493 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.666499 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:50.666559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:50.717784 1498704 cri.go:89] found id: ""
	I1217 02:08:50.717822 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.717831 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.717838 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:50.717941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:50.748500 1498704 cri.go:89] found id: ""
	I1217 02:08:50.748531 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.748543 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.748550 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:50.748618 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:50.774642 1498704 cri.go:89] found id: ""
	I1217 02:08:50.774668 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.774677 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.774683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:50.774742 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:50.803738 1498704 cri.go:89] found id: ""
	I1217 02:08:50.803760 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.803769 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.803776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:50.803840 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:50.828145 1498704 cri.go:89] found id: ""
	I1217 02:08:50.828212 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.828238 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.828256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:50.828335 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:50.853950 1498704 cri.go:89] found id: ""
	I1217 02:08:50.853976 1498704 logs.go:282] 0 containers: []
	W1217 02:08:50.853985 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.853995 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.854006 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.910278 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.910316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.924980 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.925008 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.992234 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.983666    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.984234    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986046    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.986522    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.988273    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.992257 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:50.992271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:51.018744 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:51.018778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:52.134591 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:54.134633 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:53.547953 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.558518 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:53.558593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:53.583100 1498704 cri.go:89] found id: ""
	I1217 02:08:53.583125 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.583134 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.583141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:53.583202 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:53.607925 1498704 cri.go:89] found id: ""
	I1217 02:08:53.607948 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.607956 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.607962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:53.608023 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:53.657081 1498704 cri.go:89] found id: ""
	I1217 02:08:53.657104 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.657127 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.657135 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:53.657208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:53.704278 1498704 cri.go:89] found id: ""
	I1217 02:08:53.704305 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.704313 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.704321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:53.704381 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:53.730823 1498704 cri.go:89] found id: ""
	I1217 02:08:53.730851 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.730860 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.730868 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:53.730928 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:53.757094 1498704 cri.go:89] found id: ""
	I1217 02:08:53.757116 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.757125 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.757132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:53.757192 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:53.786671 1498704 cri.go:89] found id: ""
	I1217 02:08:53.786696 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.786705 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.786711 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:53.786768 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:53.810935 1498704 cri.go:89] found id: ""
	I1217 02:08:53.810957 1498704 logs.go:282] 0 containers: []
	W1217 02:08:53.810966 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.810975 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.810986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.866107 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.866140 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:53.881003 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.881037 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.945396 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.937325    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.937758    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939350    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.939916    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.941498    7719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.945419 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:53.945432 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:53.973428 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.973469 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:56.504673 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.515738 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:56.515816 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:56.540741 1498704 cri.go:89] found id: ""
	I1217 02:08:56.540765 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.540773 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.540780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:56.540846 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:56.565810 1498704 cri.go:89] found id: ""
	I1217 02:08:56.565831 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.565840 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.565846 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:56.565907 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:56.596074 1498704 cri.go:89] found id: ""
	I1217 02:08:56.596096 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.596105 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.596112 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:56.596173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:56.636207 1498704 cri.go:89] found id: ""
	I1217 02:08:56.636229 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.636238 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.636244 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:56.636304 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:56.698720 1498704 cri.go:89] found id: ""
	I1217 02:08:56.698749 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.698758 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.698765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:56.698838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:56.732897 1498704 cri.go:89] found id: ""
	I1217 02:08:56.732918 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.732926 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.732933 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:56.732999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:56.762677 1498704 cri.go:89] found id: ""
	I1217 02:08:56.762703 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.762712 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.762719 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:56.762779 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:56.788307 1498704 cri.go:89] found id: ""
	I1217 02:08:56.788333 1498704 logs.go:282] 0 containers: []
	W1217 02:08:56.788342 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.788352 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.788364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.844513 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.844548 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.858936 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.858968 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.925270 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.917063    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.917492    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919354    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.919838    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.921299    7830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.925293 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:56.925305 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:08:56.951928 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.951967 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:08:56.634544 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:08:58.634782 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:01.135356 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:08:59.483487 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.494825 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:08:59.494899 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:08:59.520751 1498704 cri.go:89] found id: ""
	I1217 02:08:59.520777 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.520785 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.520792 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:08:59.520851 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:08:59.546097 1498704 cri.go:89] found id: ""
	I1217 02:08:59.546122 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.546131 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.546138 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:08:59.546205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:08:59.571525 1498704 cri.go:89] found id: ""
	I1217 02:08:59.571548 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.571556 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.571562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:08:59.571635 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:08:59.595916 1498704 cri.go:89] found id: ""
	I1217 02:08:59.595944 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.595952 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.595959 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:08:59.596021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:08:59.677470 1498704 cri.go:89] found id: ""
	I1217 02:08:59.677497 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.677506 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.677512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:08:59.677577 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:08:59.708285 1498704 cri.go:89] found id: ""
	I1217 02:08:59.708311 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.708320 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.708328 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:08:59.708388 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:08:59.735444 1498704 cri.go:89] found id: ""
	I1217 02:08:59.735466 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.735474 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.735481 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:08:59.735551 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:08:59.758934 1498704 cri.go:89] found id: ""
	I1217 02:08:59.758956 1498704 logs.go:282] 0 containers: []
	W1217 02:08:59.758964 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.758974 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.758985 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.786487 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.786513 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.843688 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.843719 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.858632 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.858661 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.922844 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.914351    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.915099    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.916764    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.917476    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.919123    7952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
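The five memcache.go lines are client-go's API-group discovery retries; "connection refused" on [::1]:8443, combined with the empty crictl listings above, indicates that no kube-apiserver container was ever created, not that one crashed. A minimal manual probe of the same endpoint (a sketch; <profile> is a placeholder, since this excerpt does not name the profile that process 1498704 is driving):

	# expect curl exit code 7 (failed to connect), matching the kubectl errors above
	minikube ssh -p <profile> -- curl -ksS --max-time 5 https://localhost:8443/healthz; echo "exit=$?"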
	I1217 02:08:59.922867 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:08:59.922888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.448942 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
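Before each scan, minikube first checks for a live apiserver process by full command line: pgrep's -f flag matches against the whole command line, -x requires the pattern to match it exactly, and -n selects the newest match. Run in isolation on the node, the same probe looks like this (a sketch):

	# exits non-zero when nothing matches, as is the case throughout this run
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'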
	I1217 02:09:02.459473 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:02.459570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:02.487463 1498704 cri.go:89] found id: ""
	I1217 02:09:02.487486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.487494 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.487529 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:02.487591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:02.516013 1498704 cri.go:89] found id: ""
	I1217 02:09:02.516038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.516047 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.516053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:02.516118 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:02.541783 1498704 cri.go:89] found id: ""
	I1217 02:09:02.541806 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.541814 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.541820 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:02.541876 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:02.566427 1498704 cri.go:89] found id: ""
	I1217 02:09:02.566450 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.566459 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.566465 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:02.566561 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:02.590894 1498704 cri.go:89] found id: ""
	I1217 02:09:02.590917 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.590926 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.590932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:02.590998 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:02.614645 1498704 cri.go:89] found id: ""
	I1217 02:09:02.614668 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.614677 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.614683 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:02.614747 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:02.656626 1498704 cri.go:89] found id: ""
	I1217 02:09:02.656662 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.656671 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.656681 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:02.656751 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:02.702753 1498704 cri.go:89] found id: ""
	I1217 02:09:02.702787 1498704 logs.go:282] 0 containers: []
	W1217 02:09:02.702796 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.702806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.702817 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.772243 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.763014    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764176    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.764883    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.766623    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.767262    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.772266 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:02.772278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:02.797608 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.797893 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:02.829032 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.829057 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:03.634729 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:06.135608 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:02.886939 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.886975 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.401718 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.412408 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:05.412488 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:05.441786 1498704 cri.go:89] found id: ""
	I1217 02:09:05.441821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.441830 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.441837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:05.441908 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:05.466385 1498704 cri.go:89] found id: ""
	I1217 02:09:05.466408 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.466416 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.466422 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:05.466481 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:05.491033 1498704 cri.go:89] found id: ""
	I1217 02:09:05.491057 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.491066 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.491072 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:05.491131 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:05.515650 1498704 cri.go:89] found id: ""
	I1217 02:09:05.515675 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.515684 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.515691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:05.515753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:05.539973 1498704 cri.go:89] found id: ""
	I1217 02:09:05.539996 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.540004 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.540016 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:05.540077 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:05.565317 1498704 cri.go:89] found id: ""
	I1217 02:09:05.565338 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.565347 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.565353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:05.565414 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:05.590136 1498704 cri.go:89] found id: ""
	I1217 02:09:05.590161 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.590169 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.590176 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:05.590240 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:05.614696 1498704 cri.go:89] found id: ""
	I1217 02:09:05.614733 1498704 logs.go:282] 0 containers: []
	W1217 02:09:05.614742 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.614752 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.614762 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.682980 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.683022 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.700674 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.700704 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.777617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.769023    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.769587    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771276    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.771881    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.773684    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.777635 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:05.777670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:05.803121 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.803155 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:08.635331 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	W1217 02:09:10.635438 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:08.332434 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.343036 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:08.343108 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:08.367411 1498704 cri.go:89] found id: ""
	I1217 02:09:08.367434 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.367443 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.367449 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:08.367517 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:08.391668 1498704 cri.go:89] found id: ""
	I1217 02:09:08.391695 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.391704 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.391712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:08.391775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:08.415929 1498704 cri.go:89] found id: ""
	I1217 02:09:08.415953 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.415961 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.415968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:08.416050 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:08.441685 1498704 cri.go:89] found id: ""
	I1217 02:09:08.441755 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.441779 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.441798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:08.441888 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:08.466687 1498704 cri.go:89] found id: ""
	I1217 02:09:08.466713 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.466722 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.466728 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:08.466808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:08.491044 1498704 cri.go:89] found id: ""
	I1217 02:09:08.491069 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.491078 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.491085 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:08.491190 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:08.517483 1498704 cri.go:89] found id: ""
	I1217 02:09:08.517508 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.517517 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.517524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:08.517593 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:08.543991 1498704 cri.go:89] found id: ""
	I1217 02:09:08.544017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:08.544026 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.544035 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.544053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.608510 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.608567 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.642989 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.643026 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.751212 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.742256    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.742985    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.744633    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.745089    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.746902    8283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.751241 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:08.751254 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:08.779142 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.779180 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.312760 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.327627 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:11.327714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:11.352557 1498704 cri.go:89] found id: ""
	I1217 02:09:11.352580 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.352588 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.352595 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:11.352654 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:11.378891 1498704 cri.go:89] found id: ""
	I1217 02:09:11.378913 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.378922 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.378928 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:11.378987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:11.403393 1498704 cri.go:89] found id: ""
	I1217 02:09:11.403416 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.403424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.403430 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:11.403489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:11.432435 1498704 cri.go:89] found id: ""
	I1217 02:09:11.432459 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.432472 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.432479 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:11.432565 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:11.458410 1498704 cri.go:89] found id: ""
	I1217 02:09:11.458436 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.458445 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.458451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:11.458510 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:11.484113 1498704 cri.go:89] found id: ""
	I1217 02:09:11.484140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.484149 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.484156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:11.484216 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:11.511088 1498704 cri.go:89] found id: ""
	I1217 02:09:11.511112 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.511121 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.511128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:11.511191 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:11.540295 1498704 cri.go:89] found id: ""
	I1217 02:09:11.540324 1498704 logs.go:282] 0 containers: []
	W1217 02:09:11.540333 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.540342 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.540354 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.554828 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.554857 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:11.615811 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:11.608151    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.608715    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610198    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.610600    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:11.612023    8388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:11.615835 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:11.615849 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:11.643999 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.644035 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.696705 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.696733 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:09:13.134531 1494358 node_ready.go:55] error getting node "no-preload-178365" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-178365": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 02:09:14.634797 1494358 node_ready.go:38] duration metric: took 6m0.000749408s for node "no-preload-178365" to be "Ready" ...
	I1217 02:09:14.638073 1494358 out.go:203] 
	W1217 02:09:14.640977 1494358 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:09:14.641013 1494358 out.go:285] * 
	W1217 02:09:14.643229 1494358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:09:14.646121 1494358 out.go:203] 
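At this point process 1494358 (the no-preload-178365 start whose warnings are interleaved above) has exhausted its 6m0s readiness window and exits with GUEST_START. The condition it was polling is the node's Ready status, which can be checked by hand roughly as follows (a sketch; it assumes the kubeconfig context that minikube conventionally names after the profile):

	# prints "True" once the node is Ready; here it would fail with the same connection-refused error
	kubectl --context no-preload-178365 get node no-preload-178365 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'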
	I1217 02:09:14.265939 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.276062 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:14.276129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:14.301710 1498704 cri.go:89] found id: ""
	I1217 02:09:14.301736 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.301744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.301753 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:14.301811 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:14.327085 1498704 cri.go:89] found id: ""
	I1217 02:09:14.327111 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.327119 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.327125 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:14.327182 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:14.351112 1498704 cri.go:89] found id: ""
	I1217 02:09:14.351134 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.351142 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.351148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:14.351208 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:14.379796 1498704 cri.go:89] found id: ""
	I1217 02:09:14.379823 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.379833 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.379840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:14.379902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:14.404135 1498704 cri.go:89] found id: ""
	I1217 02:09:14.404158 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.404167 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.404172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:14.404234 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:14.428171 1498704 cri.go:89] found id: ""
	I1217 02:09:14.428194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.428204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.428212 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:14.428272 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:14.455193 1498704 cri.go:89] found id: ""
	I1217 02:09:14.455217 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.455225 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.455232 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:14.455292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:14.479959 1498704 cri.go:89] found id: ""
	I1217 02:09:14.479985 1498704 logs.go:282] 0 containers: []
	W1217 02:09:14.479994 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.480003 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.480014 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.537013 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.537048 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:14.551864 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:14.551888 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:14.616449 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:14.607973    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.608950    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610555    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.610852    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:14.612336    8500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:14.616522 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:14.616551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:14.646206 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:14.646248 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.269774 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.280406 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:17.280478 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:17.305501 1498704 cri.go:89] found id: ""
	I1217 02:09:17.305529 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.305537 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.305544 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:17.305601 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:17.330336 1498704 cri.go:89] found id: ""
	I1217 02:09:17.330361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.330370 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.330377 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:17.330436 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:17.355210 1498704 cri.go:89] found id: ""
	I1217 02:09:17.355235 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.355250 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.355256 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:17.355315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:17.380868 1498704 cri.go:89] found id: ""
	I1217 02:09:17.380893 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.380901 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.380908 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:17.380968 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:17.406748 1498704 cri.go:89] found id: ""
	I1217 02:09:17.406771 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.406779 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.406785 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:17.406844 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:17.431237 1498704 cri.go:89] found id: ""
	I1217 02:09:17.431263 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.431272 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.431279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:17.431337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:17.455474 1498704 cri.go:89] found id: ""
	I1217 02:09:17.455500 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.455516 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.455523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:17.455586 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:17.479040 1498704 cri.go:89] found id: ""
	I1217 02:09:17.479062 1498704 logs.go:282] 0 containers: []
	W1217 02:09:17.479070 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.479079 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:17.479092 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.511305 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.511333 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:17.567635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:17.567672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:17.583863 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:17.583892 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:17.655165 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:17.640581    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.647186    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.648023    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.649700    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:17.650002    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:17.655185 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:17.655198 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.181833 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.192614 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:20.192732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:20.219176 1498704 cri.go:89] found id: ""
	I1217 02:09:20.219199 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.219208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.219215 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:20.219275 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:20.248198 1498704 cri.go:89] found id: ""
	I1217 02:09:20.248224 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.248233 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.248239 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:20.248299 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:20.273332 1498704 cri.go:89] found id: ""
	I1217 02:09:20.273355 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.273363 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.273370 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:20.273429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:20.299548 1498704 cri.go:89] found id: ""
	I1217 02:09:20.299621 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.299655 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.299668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:20.299741 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:20.328882 1498704 cri.go:89] found id: ""
	I1217 02:09:20.328911 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.328919 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.328925 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:20.328987 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:20.354861 1498704 cri.go:89] found id: ""
	I1217 02:09:20.354887 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.354898 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.354904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:20.354999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:20.380708 1498704 cri.go:89] found id: ""
	I1217 02:09:20.380744 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.380754 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:20.380761 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:20.380833 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:20.410724 1498704 cri.go:89] found id: ""
	I1217 02:09:20.410749 1498704 logs.go:282] 0 containers: []
	W1217 02:09:20.410758 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:20.410767 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:20.410778 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:20.470014 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:20.470053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:20.484955 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:20.484989 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:20.548617 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:20.540418    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.540939    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542451    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.542783    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:20.544309    8723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
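The "connection refused" on [::1]:8443 is consistent with the empty crictl sweeps: kubectl fails at the TCP connect, before TLS or authentication, because no apiserver container ever started listening on the port. A quick manual confirmation from inside the node (a sketch; -k skips certificate verification, and the apiserver health endpoints are normally readable anonymously under default RBAC):

    # While the apiserver is down this fails with a connection-refused error;
    # a healthy apiserver answers "ok" on /livez (or the older /healthz).
    curl -k https://localhost:8443/livez
    # If ss is available, confirm nothing is bound to the port at all.
    sudo ss -tlnp | grep 8443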
	I1217 02:09:20.548637 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:20.548649 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:20.573994 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:20.574030 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
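The "container status" step is deliberately runtime-agnostic: the backtick substitution resolves crictl if it is on PATH (otherwise leaving the bare name to fail), and the outer || falls back to docker ps when the crictl invocation fails. The same pipeline, expanded with modern $() substitution for readability:

    # Prefer crictl when it resolves; if the crictl invocation fails for any
    # reason, fall through to docker as the container runtime of last resort.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a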
	I1217 02:09:23.106211 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
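Between gathering rounds the tool also polls for an apiserver process directly with pgrep; annotated (flag meanings per procps pgrep, with the pattern quoted here for safety):

    # -f : match against the full command line, not just the executable name
    # -x : require the pattern to match that full command line exactly
    # -n : print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Exit status 1 with no output means no matching process exists yet.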
	I1217 02:09:23.116663 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:23.116732 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:23.144995 1498704 cri.go:89] found id: ""
	I1217 02:09:23.145017 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.145025 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.145031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:23.145089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:23.172623 1498704 cri.go:89] found id: ""
	I1217 02:09:23.172651 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.172660 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.172668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:23.172727 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:23.201388 1498704 cri.go:89] found id: ""
	I1217 02:09:23.201415 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.201424 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.201437 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:23.201500 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:23.225335 1498704 cri.go:89] found id: ""
	I1217 02:09:23.225361 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.225370 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.225376 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:23.225433 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:23.251629 1498704 cri.go:89] found id: ""
	I1217 02:09:23.251654 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.251662 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:23.251668 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:23.251733 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:23.279092 1498704 cri.go:89] found id: ""
	I1217 02:09:23.279120 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.279129 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:23.279136 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:23.279199 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:23.303104 1498704 cri.go:89] found id: ""
	I1217 02:09:23.303126 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.303134 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:23.303140 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:23.303204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:23.327448 1498704 cri.go:89] found id: ""
	I1217 02:09:23.327479 1498704 logs.go:282] 0 containers: []
	W1217 02:09:23.327488 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:23.327497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:23.327544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:23.394139 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:23.394186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:23.409933 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:23.409961 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:23.478459 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:23.469807    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.470444    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472084    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.472563    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:23.474208    8839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:23.478484 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:23.478498 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:23.503474 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:23.503515 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:26.036615 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.047567 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:26.047682 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:26.072876 1498704 cri.go:89] found id: ""
	I1217 02:09:26.072903 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.072912 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.072919 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:26.072981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:26.100352 1498704 cri.go:89] found id: ""
	I1217 02:09:26.100378 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.100387 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:26.100392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:26.100450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:26.135848 1498704 cri.go:89] found id: ""
	I1217 02:09:26.135875 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.135884 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:26.135890 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:26.135950 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:26.168993 1498704 cri.go:89] found id: ""
	I1217 02:09:26.169020 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.169028 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:26.169035 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:26.169094 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:26.210553 1498704 cri.go:89] found id: ""
	I1217 02:09:26.210581 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.210590 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:26.210597 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:26.210659 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:26.236497 1498704 cri.go:89] found id: ""
	I1217 02:09:26.236526 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.236534 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:26.236541 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:26.236600 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:26.261964 1498704 cri.go:89] found id: ""
	I1217 02:09:26.261989 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.261997 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:26.262004 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:26.262090 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:26.288105 1498704 cri.go:89] found id: ""
	I1217 02:09:26.288138 1498704 logs.go:282] 0 containers: []
	W1217 02:09:26.288148 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:26.288157 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:26.288168 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:26.343617 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:26.343650 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:26.358285 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:26.358312 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:26.424304 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:26.416160    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.416803    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418278    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.418710    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:26.420219    8949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:26.424327 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:26.424340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:26.450148 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:26.450185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:28.978571 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:28.990745 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:28.990835 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:29.015938 1498704 cri.go:89] found id: ""
	I1217 02:09:29.015962 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.015971 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:29.015977 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:29.016035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:29.041116 1498704 cri.go:89] found id: ""
	I1217 02:09:29.041141 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.041149 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:29.041156 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:29.041217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:29.066014 1498704 cri.go:89] found id: ""
	I1217 02:09:29.066036 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.066044 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:29.066051 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:29.066107 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:29.090514 1498704 cri.go:89] found id: ""
	I1217 02:09:29.090539 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.090548 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:29.090554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:29.090640 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:29.114384 1498704 cri.go:89] found id: ""
	I1217 02:09:29.114405 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.114414 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:29.114420 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:29.114506 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:29.143954 1498704 cri.go:89] found id: ""
	I1217 02:09:29.143977 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.143987 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:29.143995 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:29.144081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:29.185816 1498704 cri.go:89] found id: ""
	I1217 02:09:29.185839 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.185847 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:29.185864 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:29.185941 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:29.214738 1498704 cri.go:89] found id: ""
	I1217 02:09:29.214761 1498704 logs.go:282] 0 containers: []
	W1217 02:09:29.214770 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:29.214780 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:29.214807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:29.244598 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:29.244623 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:29.300237 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:29.300271 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.314809 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:29.314874 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:29.380612 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:29.372801    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.373452    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375018    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.375313    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:29.376773    9071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
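By this point the same scan-and-gather cycle has repeated at roughly three-second intervals (02:09:20, :23, :26, :29) with identical results, which is the shape of a readiness poll that keeps coming up empty. A minimal shell equivalent of that poll, under the assumption of an arbitrary 300-second deadline:

    #!/usr/bin/env bash
    # Poll until a kube-apiserver process appears or the deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never started" >&2
        exit 1
      fi
      sleep 3
    done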
	I1217 02:09:29.380633 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:29.380645 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:31.905779 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:31.917874 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:31.917963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:31.946726 1498704 cri.go:89] found id: ""
	I1217 02:09:31.946750 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.946759 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:31.946766 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:31.946829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:31.971653 1498704 cri.go:89] found id: ""
	I1217 02:09:31.971677 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.971685 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:31.971691 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:31.971753 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:31.999116 1498704 cri.go:89] found id: ""
	I1217 02:09:31.999139 1498704 logs.go:282] 0 containers: []
	W1217 02:09:31.999147 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:31.999160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:31.999224 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:32.028438 1498704 cri.go:89] found id: ""
	I1217 02:09:32.028461 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.028470 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:32.028476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:32.028535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:32.053600 1498704 cri.go:89] found id: ""
	I1217 02:09:32.053623 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.053632 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:32.053639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:32.053734 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:32.080000 1498704 cri.go:89] found id: ""
	I1217 02:09:32.080023 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.080032 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:32.080038 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:32.080100 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:32.105557 1498704 cri.go:89] found id: ""
	I1217 02:09:32.105632 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.105700 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:32.105721 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:32.105814 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:32.142478 1498704 cri.go:89] found id: ""
	I1217 02:09:32.142506 1498704 logs.go:282] 0 containers: []
	W1217 02:09:32.142515 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:32.142524 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:32.142536 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:32.158591 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:32.158625 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:32.222822 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:32.214771    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.215306    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.216819    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.217218    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:32.218806    9173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:32.222896 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:32.222917 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:32.248192 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:32.248226 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:32.275127 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:32.275152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:34.830607 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:34.841178 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:34.841251 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:34.866230 1498704 cri.go:89] found id: ""
	I1217 02:09:34.866254 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.866263 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:34.866270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:34.866347 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:34.895167 1498704 cri.go:89] found id: ""
	I1217 02:09:34.895234 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.895251 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:34.895258 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:34.895317 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:34.924481 1498704 cri.go:89] found id: ""
	I1217 02:09:34.924521 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.924530 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:34.924537 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:34.924608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:34.953744 1498704 cri.go:89] found id: ""
	I1217 02:09:34.953814 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.953830 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:34.953837 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:34.953910 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:34.978668 1498704 cri.go:89] found id: ""
	I1217 02:09:34.978735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:34.978755 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:34.978763 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:34.978823 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:35.010506 1498704 cri.go:89] found id: ""
	I1217 02:09:35.010545 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.010554 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:35.010562 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:35.010649 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:35.037564 1498704 cri.go:89] found id: ""
	I1217 02:09:35.037591 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.037601 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:35.037607 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:35.037720 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:35.063033 1498704 cri.go:89] found id: ""
	I1217 02:09:35.063072 1498704 logs.go:282] 0 containers: []
	W1217 02:09:35.063093 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:35.063107 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:35.063123 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:35.119982 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:35.120059 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:35.136426 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:35.136504 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:35.210581 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:35.202047    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.202917    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204671    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.204983    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:35.206608    9285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:35.210605 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:35.210617 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:35.235901 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:35.235932 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:37.769826 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:37.780267 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:37.780361 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:37.804770 1498704 cri.go:89] found id: ""
	I1217 02:09:37.804835 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.804858 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:37.804876 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:37.804947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:37.828942 1498704 cri.go:89] found id: ""
	I1217 02:09:37.828981 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.829006 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:37.829019 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:37.829098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:37.856624 1498704 cri.go:89] found id: ""
	I1217 02:09:37.856689 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.856714 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:37.856733 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:37.856808 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:37.895741 1498704 cri.go:89] found id: ""
	I1217 02:09:37.895779 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.895789 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:37.895796 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:37.895870 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:37.928762 1498704 cri.go:89] found id: ""
	I1217 02:09:37.928795 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.928804 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:37.928811 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:37.928889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:37.964505 1498704 cri.go:89] found id: ""
	I1217 02:09:37.964530 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.964540 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:37.964557 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:37.964622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:37.990281 1498704 cri.go:89] found id: ""
	I1217 02:09:37.990306 1498704 logs.go:282] 0 containers: []
	W1217 02:09:37.990315 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:37.990321 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:37.990409 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:38.022757 1498704 cri.go:89] found id: ""
	I1217 02:09:38.022789 1498704 logs.go:282] 0 containers: []
	W1217 02:09:38.022799 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:38.022819 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:38.022839 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:38.082781 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:38.082818 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:38.098274 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:38.098303 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:38.181369 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:38.171482    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.171936    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.173835    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.174572    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:38.176483    9393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:38.181394 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:38.181408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:38.211421 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:38.211459 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:40.744187 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:40.755584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:40.755657 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:40.784265 1498704 cri.go:89] found id: ""
	I1217 02:09:40.784290 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.784299 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:40.784305 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:40.784366 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:40.812965 1498704 cri.go:89] found id: ""
	I1217 02:09:40.813034 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.813059 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:40.813077 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:40.813170 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:40.838108 1498704 cri.go:89] found id: ""
	I1217 02:09:40.838135 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.838144 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:40.838150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:40.838218 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:40.863761 1498704 cri.go:89] found id: ""
	I1217 02:09:40.863797 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.863806 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:40.863814 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:40.863883 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:40.896946 1498704 cri.go:89] found id: ""
	I1217 02:09:40.896973 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.896982 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:40.896990 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:40.897049 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:40.927040 1498704 cri.go:89] found id: ""
	I1217 02:09:40.927067 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.927076 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:40.927083 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:40.927142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:40.953843 1498704 cri.go:89] found id: ""
	I1217 02:09:40.953869 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.953878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:40.953885 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:40.953947 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:40.983898 1498704 cri.go:89] found id: ""
	I1217 02:09:40.983921 1498704 logs.go:282] 0 containers: []
	W1217 02:09:40.983929 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:40.983938 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:40.983950 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:41.041172 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:41.041208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:41.056418 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:41.056454 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:41.119760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:41.111904    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.112302    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.113988    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.114436    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:41.115839    9506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:41.119832 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:41.119859 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:41.148272 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:41.148479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.682654 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:43.694991 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:43.695064 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:43.722566 1498704 cri.go:89] found id: ""
	I1217 02:09:43.722590 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.722599 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:43.722605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:43.722664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:43.747132 1498704 cri.go:89] found id: ""
	I1217 02:09:43.747157 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.747165 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:43.747177 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:43.747238 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:43.773465 1498704 cri.go:89] found id: ""
	I1217 02:09:43.773486 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.773494 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:43.773500 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:43.773559 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:43.798692 1498704 cri.go:89] found id: ""
	I1217 02:09:43.798716 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.798725 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:43.798731 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:43.798796 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:43.825731 1498704 cri.go:89] found id: ""
	I1217 02:09:43.825753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.825762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:43.825768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:43.825827 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:43.855796 1498704 cri.go:89] found id: ""
	I1217 02:09:43.855821 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.855829 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:43.855836 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:43.855902 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:43.886935 1498704 cri.go:89] found id: ""
	I1217 02:09:43.886960 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.886969 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:43.886975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:43.887035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:43.917934 1498704 cri.go:89] found id: ""
	I1217 02:09:43.917961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:43.917970 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:43.917979 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:43.917997 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:43.947632 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:43.947659 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:44.003825 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:44.003866 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:44.019941 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:44.019972 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:44.089358 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:44.081196    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.081940    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.083656    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.084150    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:44.085419    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:44.089380 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:44.089394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
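The block above is one iteration of minikube's apiserver wait loop: every few seconds it probes for a kube-apiserver process with pgrep, looks up each control-plane container with crictl, and, finding none, gathers kubelet/dmesg/describe-nodes/containerd diagnostics before polling again. The following is a minimal Go sketch of that shape only; it is not minikube's actual source, and runCmd, the 3-second interval, and the 6-minute timeout are assumptions made for the example:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runCmd is a hypothetical stand-in for minikube's ssh_runner: run a
    // command on the node and return its combined output.
    func runCmd(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            // Probe for the apiserver process, as in "sudo pgrep -xnf kube-apiserver.*minikube.*".
            if _, err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            // Look up each control-plane container, as in "sudo crictl ps -a --quiet --name=<component>".
            for _, c := range components {
                ids, _ := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
                if strings.TrimSpace(ids) == "" {
                    fmt.Printf("no container found matching %q\n", c)
                }
            }
            // While waiting, gather diagnostics, mirroring the "Gathering logs for ..." lines.
            _, _ = runCmd("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
            time.Sleep(3 * time.Second) // the log shows roughly 3s between polls
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }

In the real log this iteration repeats until the start timeout expires, which is why the output below cycles through identical pgrep/crictl probes with only timestamps and PIDs changing.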
	I1217 02:09:46.615402 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:46.625887 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:46.625979 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:46.650868 1498704 cri.go:89] found id: ""
	I1217 02:09:46.650891 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.650899 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:46.650906 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:46.650966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:46.675004 1498704 cri.go:89] found id: ""
	I1217 02:09:46.675025 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.675033 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:46.675039 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:46.675098 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:46.698859 1498704 cri.go:89] found id: ""
	I1217 02:09:46.698880 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.698888 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:46.698899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:46.698966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:46.722103 1498704 cri.go:89] found id: ""
	I1217 02:09:46.722130 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.722139 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:46.722146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:46.722205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:46.749559 1498704 cri.go:89] found id: ""
	I1217 02:09:46.749582 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.749591 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:46.749598 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:46.749681 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:46.775252 1498704 cri.go:89] found id: ""
	I1217 02:09:46.775274 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.775282 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:46.775289 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:46.775368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:46.799706 1498704 cri.go:89] found id: ""
	I1217 02:09:46.799738 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.799747 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:46.799754 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:46.799815 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:46.825525 1498704 cri.go:89] found id: ""
	I1217 02:09:46.825552 1498704 logs.go:282] 0 containers: []
	W1217 02:09:46.825562 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:46.825596 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:46.825616 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:46.898518 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:46.889823    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.890505    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892089    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.892616    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:46.894554    9724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:46.898546 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:46.898559 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:46.924328 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:46.924360 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:46.953287 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:46.953315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:47.008776 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:47.008811 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.524226 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:49.535609 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:49.535691 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:49.563709 1498704 cri.go:89] found id: ""
	I1217 02:09:49.563735 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.563744 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:49.563751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:49.563829 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:49.589205 1498704 cri.go:89] found id: ""
	I1217 02:09:49.589229 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.589238 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:49.589245 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:49.589305 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:49.615016 1498704 cri.go:89] found id: ""
	I1217 02:09:49.615038 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.615046 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:49.615053 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:49.615110 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:49.639299 1498704 cri.go:89] found id: ""
	I1217 02:09:49.639377 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.639407 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:49.639416 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:49.639514 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:49.664056 1498704 cri.go:89] found id: ""
	I1217 02:09:49.664079 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.664087 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:49.664093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:49.664151 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:49.688630 1498704 cri.go:89] found id: ""
	I1217 02:09:49.688652 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.688661 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:49.688667 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:49.688724 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:49.712428 1498704 cri.go:89] found id: ""
	I1217 02:09:49.712447 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.712461 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:49.712467 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:49.712525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:49.736311 1498704 cri.go:89] found id: ""
	I1217 02:09:49.736388 1498704 logs.go:282] 0 containers: []
	W1217 02:09:49.736412 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:49.736433 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:49.736473 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:49.792224 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:49.792264 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:49.806602 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:49.806639 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.873760 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.862802    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.863533    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.865385    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.866008    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.867605    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:49.873781 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:49.873793 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:49.901849 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:49.901881 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.452856 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:52.463628 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:52.463707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:52.487769 1498704 cri.go:89] found id: ""
	I1217 02:09:52.487794 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.487802 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:52.487809 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:52.487901 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:52.515989 1498704 cri.go:89] found id: ""
	I1217 02:09:52.516013 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.516022 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:52.516028 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:52.516136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:52.542514 1498704 cri.go:89] found id: ""
	I1217 02:09:52.542538 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.542547 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:52.542554 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:52.542622 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:52.567016 1498704 cri.go:89] found id: ""
	I1217 02:09:52.567050 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.567059 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:52.567067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:52.567129 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:52.591935 1498704 cri.go:89] found id: ""
	I1217 02:09:52.591961 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.591969 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:52.591975 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:52.592035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:52.617548 1498704 cri.go:89] found id: ""
	I1217 02:09:52.617573 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.617583 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:52.617589 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:52.617668 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:52.642857 1498704 cri.go:89] found id: ""
	I1217 02:09:52.642881 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.642889 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:52.642895 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:52.642952 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:52.666997 1498704 cri.go:89] found id: ""
	I1217 02:09:52.667022 1498704 logs.go:282] 0 containers: []
	W1217 02:09:52.667031 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:52.667042 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:52.667055 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.736175 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.727685    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.728434    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730110    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.730659    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.732265    9950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:52.736198 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:52.736210 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:52.761310 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.761340 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:52.789730 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:52.789758 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:52.846428 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:52.846464 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.363216 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:55.378169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:55.378242 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:55.405237 1498704 cri.go:89] found id: ""
	I1217 02:09:55.405262 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.405271 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:55.405277 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:55.405341 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:55.431829 1498704 cri.go:89] found id: ""
	I1217 02:09:55.431852 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.431860 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:55.431866 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:55.431924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:55.464126 1498704 cri.go:89] found id: ""
	I1217 02:09:55.464149 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.464157 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:55.464163 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:55.464221 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:55.489098 1498704 cri.go:89] found id: ""
	I1217 02:09:55.489140 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.489174 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:55.489188 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:55.489291 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:55.514718 1498704 cri.go:89] found id: ""
	I1217 02:09:55.514753 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.514762 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:55.514768 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:55.514828 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:55.538941 1498704 cri.go:89] found id: ""
	I1217 02:09:55.538964 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.538972 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:55.538979 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:55.539040 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:55.564206 1498704 cri.go:89] found id: ""
	I1217 02:09:55.564233 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.564242 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:55.564248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:55.564307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:55.588698 1498704 cri.go:89] found id: ""
	I1217 02:09:55.588722 1498704 logs.go:282] 0 containers: []
	W1217 02:09:55.588731 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:55.588740 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:55.588751 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.643314 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.643346 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.657901 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.657933 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.728753 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.720443   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.721112   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722240   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.722829   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.724553   10067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:55.728775 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:55.728788 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:55.754781 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.754822 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:58.282279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:58.292524 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:09:58.292594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:09:58.320120 1498704 cri.go:89] found id: ""
	I1217 02:09:58.320144 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.320153 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:58.320160 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:09:58.320219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:09:58.344609 1498704 cri.go:89] found id: ""
	I1217 02:09:58.344634 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.344643 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:09:58.344649 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:09:58.344714 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:09:58.371166 1498704 cri.go:89] found id: ""
	I1217 02:09:58.371194 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.371203 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:09:58.371209 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:09:58.371267 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:09:58.399919 1498704 cri.go:89] found id: ""
	I1217 02:09:58.399947 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.399955 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:58.399961 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:09:58.400029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:09:58.426746 1498704 cri.go:89] found id: ""
	I1217 02:09:58.426774 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.426783 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:58.426789 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:09:58.426849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:09:58.452086 1498704 cri.go:89] found id: ""
	I1217 02:09:58.452164 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.452187 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:58.452202 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:09:58.452313 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:09:58.479597 1498704 cri.go:89] found id: ""
	I1217 02:09:58.479640 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.479650 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:58.479657 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:09:58.479735 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:09:58.507631 1498704 cri.go:89] found id: ""
	I1217 02:09:58.507660 1498704 logs.go:282] 0 containers: []
	W1217 02:09:58.507668 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.507677 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.507688 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.563330 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.563364 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.577956 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.577986 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.640599 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.632937   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.633485   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.634953   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.635364   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.636788   10183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:58.640618 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:09:58.640631 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:09:58.665542 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.665579 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.193230 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:01.205093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:01.205168 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:01.231574 1498704 cri.go:89] found id: ""
	I1217 02:10:01.231657 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.231671 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:01.231679 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:01.231755 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:01.258626 1498704 cri.go:89] found id: ""
	I1217 02:10:01.258656 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.258665 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:01.258671 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:01.258731 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:01.285028 1498704 cri.go:89] found id: ""
	I1217 02:10:01.285107 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.285130 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:01.285150 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:01.285236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:01.311238 1498704 cri.go:89] found id: ""
	I1217 02:10:01.311260 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.311270 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:01.311276 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:01.311337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:01.335915 1498704 cri.go:89] found id: ""
	I1217 02:10:01.335938 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.335946 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.335953 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:01.336013 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:01.362270 1498704 cri.go:89] found id: ""
	I1217 02:10:01.362299 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.362310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.362317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:01.362386 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:01.389194 1498704 cri.go:89] found id: ""
	I1217 02:10:01.389272 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.389296 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.389315 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:01.389404 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:01.425060 1498704 cri.go:89] found id: ""
	I1217 02:10:01.425133 1498704 logs.go:282] 0 containers: []
	W1217 02:10:01.425156 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.425178 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.425214 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.484970 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.485005 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.500061 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.500089 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.568584 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.560770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.561180   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.562770   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.563222   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.564705   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:01.568606 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:01.568618 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:01.594966 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.595000 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.124707 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:04.138794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:04.138889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:04.192615 1498704 cri.go:89] found id: ""
	I1217 02:10:04.192646 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.192657 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:04.192664 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:04.192738 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:04.223099 1498704 cri.go:89] found id: ""
	I1217 02:10:04.223126 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.223135 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.223142 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:04.223204 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:04.251428 1498704 cri.go:89] found id: ""
	I1217 02:10:04.251451 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.251460 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.251466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:04.251549 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:04.277739 1498704 cri.go:89] found id: ""
	I1217 02:10:04.277767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.277778 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.277786 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:04.277849 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:04.302600 1498704 cri.go:89] found id: ""
	I1217 02:10:04.302625 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.302633 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.302639 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:04.302702 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:04.328192 1498704 cri.go:89] found id: ""
	I1217 02:10:04.328221 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.328230 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.328237 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:04.328307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:04.354026 1498704 cri.go:89] found id: ""
	I1217 02:10:04.354049 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.354058 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.354064 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:04.354125 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:04.387067 1498704 cri.go:89] found id: ""
	I1217 02:10:04.387101 1498704 logs.go:282] 0 containers: []
	W1217 02:10:04.387111 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.387140 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:04.387159 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:04.420944 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.420981 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:04.453477 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.453511 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.509779 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.509814 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.525121 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.525151 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.596992 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.588312   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.589011   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590255   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.590954   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.592734   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
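The pattern above repeats for the remainder of this start attempt: every crictl query returns an empty ID list, so none of the control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager) were ever created, and each kubectl call is refused on localhost:8443 before it can reach an API server. A minimal sketch of re-running the same probes by hand from a shell inside the node (the profile name is not shown in this excerpt; shell access via `minikube ssh` is assumed):

  sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # same process check minikube runs above
  sudo crictl ps -a --quiet --name=kube-apiserver   # same container check; empty output means no container exists
  curl -k https://localhost:8443/healthz            # shows "Connection refused" when nothing listens on the apiserver port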
	I1217 02:10:07.097279 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.107872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:07.107951 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:07.140845 1498704 cri.go:89] found id: ""
	I1217 02:10:07.140873 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.140883 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.140889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:07.140949 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:07.171271 1498704 cri.go:89] found id: ""
	I1217 02:10:07.171293 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.171301 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.171307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:07.171368 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:07.199048 1498704 cri.go:89] found id: ""
	I1217 02:10:07.199075 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.199085 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.199092 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:07.199152 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:07.223715 1498704 cri.go:89] found id: ""
	I1217 02:10:07.223755 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.223765 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.223771 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:07.223838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:07.250683 1498704 cri.go:89] found id: ""
	I1217 02:10:07.250708 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.250718 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.250724 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:07.250783 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:07.274541 1498704 cri.go:89] found id: ""
	I1217 02:10:07.274614 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.274627 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.274661 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:07.274752 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:07.298768 1498704 cri.go:89] found id: ""
	I1217 02:10:07.298833 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.298859 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.298872 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:07.298944 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:07.322447 1498704 cri.go:89] found id: ""
	I1217 02:10:07.322510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:07.322534 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.322549 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.322561 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.392049 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.383394   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.384747   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386434   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.386720   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.388152   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:07.392072 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:07.392086 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:07.419785 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.419819 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.448497 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.448525 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.505149 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.505186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
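Each polling cycle gathers the same log sources: kubelet, dmesg, containerd, container status, plus the failing describe-nodes call. Collected as one hand-runnable snippet, mirroring the exact commands in the log (run inside the node):

  sudo journalctl -u kubelet -n 400                                         # kubelet service log, last 400 lines
  sudo journalctl -u containerd -n 400                                      # containerd service log
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors, uncolored
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status, falling back to docker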
	I1217 02:10:10.022238 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.034403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:10.034482 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:10.061856 1498704 cri.go:89] found id: ""
	I1217 02:10:10.061882 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.061891 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.061897 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:10.061976 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:10.089092 1498704 cri.go:89] found id: ""
	I1217 02:10:10.089118 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.089128 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.089141 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:10.089217 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:10.115444 1498704 cri.go:89] found id: ""
	I1217 02:10:10.115467 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.115476 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.115482 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:10.115579 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:10.142860 1498704 cri.go:89] found id: ""
	I1217 02:10:10.142889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.142897 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.142904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:10.142975 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:10.171034 1498704 cri.go:89] found id: ""
	I1217 02:10:10.171061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.171070 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.171076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:10.171135 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:10.201087 1498704 cri.go:89] found id: ""
	I1217 02:10:10.201121 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.201130 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.201137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:10.201206 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:10.227252 1498704 cri.go:89] found id: ""
	I1217 02:10:10.227316 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.227340 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.227353 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:10.227429 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:10.256814 1498704 cri.go:89] found id: ""
	I1217 02:10:10.256850 1498704 logs.go:282] 0 containers: []
	W1217 02:10:10.256859 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.256885 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.256905 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.316432 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.316484 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.331782 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.331807 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.418862 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.410069   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.410617   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.412164   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.413026   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.414651   10629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.418886 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:10.418898 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:10.447108 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.447142 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:12.978148 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.988751 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:12.988821 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:13.014409 1498704 cri.go:89] found id: ""
	I1217 02:10:13.014435 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.014445 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.014452 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:13.014516 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:13.039697 1498704 cri.go:89] found id: ""
	I1217 02:10:13.039725 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.039734 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.039741 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:13.039830 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:13.063238 1498704 cri.go:89] found id: ""
	I1217 02:10:13.063263 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.063272 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.063279 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:13.063337 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:13.087932 1498704 cri.go:89] found id: ""
	I1217 02:10:13.087955 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.087964 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.087970 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:13.088029 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:13.116779 1498704 cri.go:89] found id: ""
	I1217 02:10:13.116824 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.116833 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.116840 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:13.116924 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:13.152355 1498704 cri.go:89] found id: ""
	I1217 02:10:13.152379 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.152388 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.152395 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:13.152462 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:13.178465 1498704 cri.go:89] found id: ""
	I1217 02:10:13.178498 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.178507 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.178513 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:13.178597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:13.204065 1498704 cri.go:89] found id: ""
	I1217 02:10:13.204090 1498704 logs.go:282] 0 containers: []
	W1217 02:10:13.204099 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.204109 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.204119 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:13.260597 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.260643 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.275806 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.275834 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.339094 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.330634   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.331065   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.332876   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.333564   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.335042   10740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:13.339116 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:13.339128 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:13.364711 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.364742 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:15.901294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:15.915207 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:15.915287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:15.944035 1498704 cri.go:89] found id: ""
	I1217 02:10:15.944062 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.944071 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:15.944078 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:15.944142 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:15.969105 1498704 cri.go:89] found id: ""
	I1217 02:10:15.969132 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.969142 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:15.969148 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:15.969213 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:15.994468 1498704 cri.go:89] found id: ""
	I1217 02:10:15.994495 1498704 logs.go:282] 0 containers: []
	W1217 02:10:15.994505 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:15.994511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:15.994576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:16.021869 1498704 cri.go:89] found id: ""
	I1217 02:10:16.021897 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.021907 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.021914 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:16.021981 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:16.050208 1498704 cri.go:89] found id: ""
	I1217 02:10:16.050236 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.050245 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.050252 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:16.050319 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:16.076004 1498704 cri.go:89] found id: ""
	I1217 02:10:16.076031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.076041 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.076048 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:16.076159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:16.102446 1498704 cri.go:89] found id: ""
	I1217 02:10:16.102526 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.102550 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.102563 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:16.102643 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:16.134280 1498704 cri.go:89] found id: ""
	I1217 02:10:16.134306 1498704 logs.go:282] 0 containers: []
	W1217 02:10:16.134315 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.134325 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.134362 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:16.173187 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.173220 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.231927 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.231960 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.247063 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.247093 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.315647 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.307649   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.308739   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.309576   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.310605   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.311801   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.315668 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:16.315681 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:18.841379 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:18.852146 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:18.852219 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:18.877675 1498704 cri.go:89] found id: ""
	I1217 02:10:18.877750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.877765 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:18.877773 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:18.877839 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:18.903447 1498704 cri.go:89] found id: ""
	I1217 02:10:18.903482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.903491 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:18.903498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:18.903576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:18.929561 1498704 cri.go:89] found id: ""
	I1217 02:10:18.929588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.929597 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:18.929604 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:18.929683 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:18.955239 1498704 cri.go:89] found id: ""
	I1217 02:10:18.955333 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.955350 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:18.955358 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:18.955424 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:18.979922 1498704 cri.go:89] found id: ""
	I1217 02:10:18.979953 1498704 logs.go:282] 0 containers: []
	W1217 02:10:18.979962 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:18.979968 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:18.980035 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:19.007041 1498704 cri.go:89] found id: ""
	I1217 02:10:19.007077 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.007087 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.007093 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:19.007177 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:19.035426 1498704 cri.go:89] found id: ""
	I1217 02:10:19.035450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.035459 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.035466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:19.035542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:19.060135 1498704 cri.go:89] found id: ""
	I1217 02:10:19.060159 1498704 logs.go:282] 0 containers: []
	W1217 02:10:19.060167 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.060200 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.060217 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.116693 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.116728 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.134579 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.134610 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.216066 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.207558   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.208046   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.209922   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.210470   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.212114   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.216089 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:19.216105 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:19.242169 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.242202 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:21.771406 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:21.782951 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:21.783026 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:21.809728 1498704 cri.go:89] found id: ""
	I1217 02:10:21.809750 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.809758 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:21.809765 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:21.809824 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:21.841207 1498704 cri.go:89] found id: ""
	I1217 02:10:21.841233 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.841242 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:21.841248 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:21.841307 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:21.868982 1498704 cri.go:89] found id: ""
	I1217 02:10:21.869008 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.869017 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:21.869023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:21.869102 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:21.895994 1498704 cri.go:89] found id: ""
	I1217 02:10:21.896030 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.896040 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:21.896046 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:21.896117 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:21.927675 1498704 cri.go:89] found id: ""
	I1217 02:10:21.927767 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.927786 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:21.927798 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:21.927886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:21.956133 1498704 cri.go:89] found id: ""
	I1217 02:10:21.956157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.956166 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:21.956172 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:21.956235 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:21.987411 1498704 cri.go:89] found id: ""
	I1217 02:10:21.987442 1498704 logs.go:282] 0 containers: []
	W1217 02:10:21.987451 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:21.987458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:21.987528 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:22.018001 1498704 cri.go:89] found id: ""
	I1217 02:10:22.018031 1498704 logs.go:282] 0 containers: []
	W1217 02:10:22.018041 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.018058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.018072 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.077509 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.077544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:22.094048 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.094152 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.179483 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.170164   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.171129   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.172667   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.173275   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.174996   11074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.179527 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:22.179552 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:22.208002 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.208053 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.745839 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:24.756980 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:24.757073 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:24.781924 1498704 cri.go:89] found id: ""
	I1217 02:10:24.781947 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.781955 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:24.781962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:24.782022 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:24.807686 1498704 cri.go:89] found id: ""
	I1217 02:10:24.807709 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.807718 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:24.807725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:24.807785 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:24.833146 1498704 cri.go:89] found id: ""
	I1217 02:10:24.833177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.833197 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:24.833204 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:24.833268 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:24.859474 1498704 cri.go:89] found id: ""
	I1217 02:10:24.859496 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.859505 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:24.859523 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:24.859585 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:24.885498 1498704 cri.go:89] found id: ""
	I1217 02:10:24.885523 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.885532 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:24.885549 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:24.885608 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:24.910357 1498704 cri.go:89] found id: ""
	I1217 02:10:24.910394 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.910403 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:24.910410 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:24.910487 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:24.935548 1498704 cri.go:89] found id: ""
	I1217 02:10:24.935572 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.935581 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:24.935588 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:24.935650 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:24.961748 1498704 cri.go:89] found id: ""
	I1217 02:10:24.961774 1498704 logs.go:282] 0 containers: []
	W1217 02:10:24.961813 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:24.961831 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:24.961852 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:24.989413 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:24.989488 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.046752 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.046797 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.074232 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.074268 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.166951 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.152840   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.157975   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.158869   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.160827   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.161145   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.166980 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:25.166994 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
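The describe-nodes probe always points kubectl at the node-local kubeconfig, and every refusal is against localhost:8443, so the server URL baked into that kubeconfig is worth verifying first. A sketch of that check, using only paths that appear in the log above (the expected output, https://localhost:8443, is inferred from the dial errors, not shown directly in the log):

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # prints the API server URL kubectl dials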
	I1217 02:10:27.699737 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:27.710317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:27.710401 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:27.735667 1498704 cri.go:89] found id: ""
	I1217 02:10:27.735694 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.735703 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:27.735709 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:27.735770 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:27.764035 1498704 cri.go:89] found id: ""
	I1217 02:10:27.764061 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.764070 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:27.764076 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:27.764136 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:27.788237 1498704 cri.go:89] found id: ""
	I1217 02:10:27.788265 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.788273 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:27.788280 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:27.788340 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:27.815686 1498704 cri.go:89] found id: ""
	I1217 02:10:27.815714 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.815723 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:27.815730 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:27.815792 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:27.846482 1498704 cri.go:89] found id: ""
	I1217 02:10:27.846510 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.846518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:27.846525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:27.846584 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:27.871189 1498704 cri.go:89] found id: ""
	I1217 02:10:27.871217 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.871227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:27.871233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:27.871292 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:27.899034 1498704 cri.go:89] found id: ""
	I1217 02:10:27.899056 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.899064 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:27.899070 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:27.899128 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:27.923014 1498704 cri.go:89] found id: ""
	I1217 02:10:27.923037 1498704 logs.go:282] 0 containers: []
	W1217 02:10:27.923046 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:27.923055 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:27.923066 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:27.948254 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:27.948289 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:27.978557 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:27.978582 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.033709 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.033748 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:28.049287 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:28.049315 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:28.120598 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:28.111016   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.111430   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113055   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.113399   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:28.114622   11314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
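The crictl sweep above is minikube asking the CRI for each expected control-plane container by name; an empty ID list is what produces each `No container was found matching ...` warning. A minimal sketch of the same sweep, reusing the exact command from the log (the component list is taken verbatim from the queries above):

    # Sketch: repeat minikube's per-component container scan by hand.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container matching $c"
    done

An empty result for kube-apiserver in particular explains the connection-refused errors: the container was never created, so nothing can serve port 8443.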
	I1217 02:10:30.621228 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:30.633415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:30.633544 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:30.660114 1498704 cri.go:89] found id: ""
	I1217 02:10:30.660186 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.660208 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:30.660228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:30.660315 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:30.687423 1498704 cri.go:89] found id: ""
	I1217 02:10:30.687450 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.687459 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:30.687466 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:30.687542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:30.712536 1498704 cri.go:89] found id: ""
	I1217 02:10:30.712568 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.712577 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:30.712584 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:30.712658 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:30.736913 1498704 cri.go:89] found id: ""
	I1217 02:10:30.736983 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.737007 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:30.737025 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:30.737115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:30.761778 1498704 cri.go:89] found id: ""
	I1217 02:10:30.761852 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.761875 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:30.761889 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:30.761963 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:30.789829 1498704 cri.go:89] found id: ""
	I1217 02:10:30.789854 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.789863 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:30.789869 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:30.789930 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:30.815268 1498704 cri.go:89] found id: ""
	I1217 02:10:30.815296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.815304 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:30.815311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:30.815373 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:30.839769 1498704 cri.go:89] found id: ""
	I1217 02:10:30.839793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:30.839802 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:30.839811 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:30.839823 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:30.854187 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:30.854216 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:30.917680 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:30.908973   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.909688   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911279   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.911863   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:30.913482   11416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:30.917706 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:30.917718 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:30.943267 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:30.943300 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:30.970294 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:30.970374 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.525981 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:33.536356 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:33.536427 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:33.561187 1498704 cri.go:89] found id: ""
	I1217 02:10:33.561210 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.561219 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:33.561225 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:33.561287 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:33.589979 1498704 cri.go:89] found id: ""
	I1217 02:10:33.590002 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.590012 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:33.590023 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:33.590082 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:33.615543 1498704 cri.go:89] found id: ""
	I1217 02:10:33.615567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.615576 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:33.615583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:33.615644 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:33.648052 1498704 cri.go:89] found id: ""
	I1217 02:10:33.648080 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.648089 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:33.648095 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:33.648162 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:33.676343 1498704 cri.go:89] found id: ""
	I1217 02:10:33.676376 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.676386 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:33.676392 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:33.676459 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:33.707262 1498704 cri.go:89] found id: ""
	I1217 02:10:33.707338 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.707353 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:33.707359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:33.707419 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:33.732853 1498704 cri.go:89] found id: ""
	I1217 02:10:33.732920 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.732945 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:33.732963 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:33.733053 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:33.757542 1498704 cri.go:89] found id: ""
	I1217 02:10:33.757567 1498704 logs.go:282] 0 containers: []
	W1217 02:10:33.757576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:33.757585 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:33.757596 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:33.821758 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:33.813865   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.814366   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.815953   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.816345   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:33.817904   11523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:33.821777 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:33.821791 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:33.846519 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:33.846555 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:33.873755 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:33.873782 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:33.930246 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:33.930282 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
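Each iteration opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, finding nothing, repeats the whole gather roughly every three seconds. A hedged, simplified equivalent of that wait loop (minikube's real retry logic lives in Go; this is only an illustration):

    # Illustrative wait loop, not minikube's actual implementation.
    # -x matches the whole command line, -n picks the newest PID,
    # -f matches the pattern against the full argument list.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        echo "kube-apiserver not running yet; retrying in 3s"
        sleep 3
    done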
	I1217 02:10:36.445766 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:36.456503 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:36.456576 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:36.483872 1498704 cri.go:89] found id: ""
	I1217 02:10:36.483894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.483903 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:36.483909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:36.483970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:36.508742 1498704 cri.go:89] found id: ""
	I1217 02:10:36.508765 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.508774 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:36.508780 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:36.508838 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:36.535472 1498704 cri.go:89] found id: ""
	I1217 02:10:36.535511 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.535520 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:36.535527 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:36.535591 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:36.566274 1498704 cri.go:89] found id: ""
	I1217 02:10:36.566296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.566305 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:36.566311 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:36.566372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:36.590882 1498704 cri.go:89] found id: ""
	I1217 02:10:36.590904 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.590912 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:36.590918 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:36.590977 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:36.614768 1498704 cri.go:89] found id: ""
	I1217 02:10:36.614793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.614802 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:36.614808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:36.614889 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:36.643752 1498704 cri.go:89] found id: ""
	I1217 02:10:36.643778 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.643787 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:36.643794 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:36.643857 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:36.672151 1498704 cri.go:89] found id: ""
	I1217 02:10:36.672177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:36.672186 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:36.672194 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:36.672208 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:36.733511 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:36.733544 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:36.752180 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:36.752255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:36.815443 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:36.807321   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.807927   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.809664   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.810137   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:36.811712   11640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:36.815465 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:36.815478 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:36.840305 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:36.840349 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
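The "container status" probe uses a small shell fallback chain. The same command as in the log, with its logic unpacked in comments:

    # `which crictl` prints crictl's path when installed; `|| echo crictl`
    # keeps the substitution non-empty so the command stays well-formed.
    # If the crictl invocation fails, the trailing `||` falls back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a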
	I1217 02:10:39.373770 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:39.386294 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:39.386380 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:39.420073 1498704 cri.go:89] found id: ""
	I1217 02:10:39.420117 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.420126 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:39.420132 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:39.420210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:39.454303 1498704 cri.go:89] found id: ""
	I1217 02:10:39.454327 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.454338 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:39.454344 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:39.454402 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:39.483117 1498704 cri.go:89] found id: ""
	I1217 02:10:39.483143 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.483152 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:39.483159 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:39.483236 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:39.507851 1498704 cri.go:89] found id: ""
	I1217 02:10:39.507927 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.507942 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:39.507949 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:39.508011 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:39.535318 1498704 cri.go:89] found id: ""
	I1217 02:10:39.535344 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.535353 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:39.535359 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:39.535460 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:39.559510 1498704 cri.go:89] found id: ""
	I1217 02:10:39.559587 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.559602 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:39.559610 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:39.559670 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:39.588446 1498704 cri.go:89] found id: ""
	I1217 02:10:39.588477 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.588487 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:39.588493 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:39.588597 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:39.616016 1498704 cri.go:89] found id: ""
	I1217 02:10:39.616041 1498704 logs.go:282] 0 containers: []
	W1217 02:10:39.616049 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:39.616058 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:39.616069 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:39.678516 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:39.678553 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:39.698413 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:39.698440 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:39.766310 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:39.757858   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.758625   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760117   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.760571   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:39.762054   11752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:39.766333 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:39.766347 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:39.791602 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:39.791641 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:42.319919 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:42.330880 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:42.330962 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:42.355776 1498704 cri.go:89] found id: ""
	I1217 02:10:42.355798 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.355807 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:42.355813 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:42.355872 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:42.393050 1498704 cri.go:89] found id: ""
	I1217 02:10:42.393084 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.393093 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:42.393100 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:42.393159 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:42.426120 1498704 cri.go:89] found id: ""
	I1217 02:10:42.426157 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.426166 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:42.426174 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:42.426245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:42.456881 1498704 cri.go:89] found id: ""
	I1217 02:10:42.456917 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.456926 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:42.456932 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:42.456999 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:42.481272 1498704 cri.go:89] found id: ""
	I1217 02:10:42.481298 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.481307 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:42.481312 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:42.481372 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:42.506468 1498704 cri.go:89] found id: ""
	I1217 02:10:42.506497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.506506 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:42.506512 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:42.506572 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:42.531395 1498704 cri.go:89] found id: ""
	I1217 02:10:42.531460 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.531476 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:42.531484 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:42.531552 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:42.555791 1498704 cri.go:89] found id: ""
	I1217 02:10:42.555814 1498704 logs.go:282] 0 containers: []
	W1217 02:10:42.555822 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:42.555831 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:42.555843 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:42.611764 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:42.611800 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:42.627436 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:42.627463 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:42.717562 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:42.708956   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.709575   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711303   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.711863   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:42.713690   11868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:42.717584 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:42.717597 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:42.742727 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:42.742763 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:45.269723 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:45.281660 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:45.281736 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:45.307916 1498704 cri.go:89] found id: ""
	I1217 02:10:45.307941 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.307950 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:45.307956 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:45.308021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:45.337837 1498704 cri.go:89] found id: ""
	I1217 02:10:45.337862 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.337871 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:45.337878 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:45.337943 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:45.382867 1498704 cri.go:89] found id: ""
	I1217 02:10:45.382894 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.382903 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:45.382909 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:45.382970 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:45.424600 1498704 cri.go:89] found id: ""
	I1217 02:10:45.424629 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.424637 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:45.424644 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:45.424707 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:45.456469 1498704 cri.go:89] found id: ""
	I1217 02:10:45.456497 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.456505 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:45.456511 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:45.456574 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:45.482345 1498704 cri.go:89] found id: ""
	I1217 02:10:45.482370 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.482378 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:45.482385 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:45.482450 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:45.507901 1498704 cri.go:89] found id: ""
	I1217 02:10:45.507930 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.507948 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:45.507955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:45.508065 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:45.532875 1498704 cri.go:89] found id: ""
	I1217 02:10:45.532896 1498704 logs.go:282] 0 containers: []
	W1217 02:10:45.532904 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:45.532913 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:45.532924 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:45.589239 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:45.589273 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:45.604011 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:45.604045 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:45.695710 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:45.686715   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.687431   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689161   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.689946   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:45.691789   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:45.695788 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:45.695808 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:45.721274 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:45.721310 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.251294 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:48.261750 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.261825 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.286414 1498704 cri.go:89] found id: ""
	I1217 02:10:48.286441 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.286450 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.286457 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.286515 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.315314 1498704 cri.go:89] found id: ""
	I1217 02:10:48.315336 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.315344 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.315351 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.315411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.339435 1498704 cri.go:89] found id: ""
	I1217 02:10:48.339461 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.339469 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.339476 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.339543 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.363969 1498704 cri.go:89] found id: ""
	I1217 02:10:48.364045 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.364061 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.364069 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.364134 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.391387 1498704 cri.go:89] found id: ""
	I1217 02:10:48.391409 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.391418 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.391425 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.391489 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.422985 1498704 cri.go:89] found id: ""
	I1217 02:10:48.423006 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.423014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.423021 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.423081 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.451561 1498704 cri.go:89] found id: ""
	I1217 02:10:48.451588 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.451598 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.451605 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:48.451667 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:48.477573 1498704 cri.go:89] found id: ""
	I1217 02:10:48.477597 1498704 logs.go:282] 0 containers: []
	W1217 02:10:48.477607 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:48.477616 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:48.477627 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:48.503190 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:48.503227 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:48.531901 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:48.531927 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:48.590637 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:48.590670 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:48.606410 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:48.606441 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:48.698001 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:48.689453   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.690595   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692088   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.692610   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:48.694141   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
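Why localhost:8443? The describe-nodes call runs kubectl with /var/lib/minikube/kubeconfig, and the dial errors above show that this kubeconfig points the client at https://localhost:8443. A hedged way to confirm the target from the host (again with "<profile>" as a placeholder):

    # Hypothetical check of the in-node kubeconfig's server field:
    minikube ssh -p <profile> -- sudo grep 'server:' /var/lib/minikube/kubeconfig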
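
Every "describe nodes" attempt in this run fails the same way: kubectl cannot even finish API discovery, because nothing is listening on the apiserver port, so every request dies with connect: connection refused on [::1]:8443. Taken together with the empty crictl listings above, a plausible reading is that the control plane never came up at all rather than crashing mid-run. A minimal Go sketch of the same reachability check (the /readyz path, the 5 s timeout, and the skipped certificate verification are illustrative assumptions, not values taken from this run):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeAPIServer only asks whether anything is listening on the apiserver
    // port. "connection refused" is the same failure kubectl reports above:
    // no listener, so no request ever reaches Kubernetes.
    func probeAPIServer(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: certificate checks are skipped because only
    			// reachability matters here, not identity.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return fmt.Errorf("apiserver unreachable: %w", err)
    	}
    	defer resp.Body.Close()
    	// Any HTTP status at all, even 401, would rule out the refused case.
    	fmt.Println("apiserver answered:", resp.Status)
    	return nil
    }

    func main() {
    	if err := probeAPIServer("https://localhost:8443/readyz"); err != nil {
    		fmt.Println(err)
    	}
    }
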
	I1217 02:10:51.198775 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:51.210128 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:51.210207 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:51.239455 1498704 cri.go:89] found id: ""
	I1217 02:10:51.239482 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.239491 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:51.239504 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:51.239587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:51.265468 1498704 cri.go:89] found id: ""
	I1217 02:10:51.265541 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.265565 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:51.265583 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:51.265684 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:51.290269 1498704 cri.go:89] found id: ""
	I1217 02:10:51.290294 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.290303 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:51.290310 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:51.290403 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:51.315672 1498704 cri.go:89] found id: ""
	I1217 02:10:51.315697 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.315706 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:51.315712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:51.315775 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:51.345852 1498704 cri.go:89] found id: ""
	I1217 02:10:51.345922 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.345938 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:51.345945 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:51.346021 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:51.374855 1498704 cri.go:89] found id: ""
	I1217 02:10:51.374884 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.374892 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:51.374899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:51.374967 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:51.408516 1498704 cri.go:89] found id: ""
	I1217 02:10:51.408553 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.408563 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:51.408569 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:51.408636 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:51.443401 1498704 cri.go:89] found id: ""
	I1217 02:10:51.443428 1498704 logs.go:282] 0 containers: []
	W1217 02:10:51.443436 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:51.443445 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:51.443474 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:51.499872 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:51.499907 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:51.514690 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:51.514759 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:51.581421 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:51.573065   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.573700   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.575403   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.576080   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:51.577582   12203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:51.581455 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:51.581470 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:51.606921 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:51.606964 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
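
Each gathering cycle walks the same fixed list of component names and asks the container runtime for matches; an empty ID list is what produces the repeated No container was found matching warnings. A sketch of that enumeration, assuming crictl is installed on the node (the component list and the sudo prefix are copied from the commands above; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components mirrors the names queried in every cycle of the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
    	for _, name := range components {
    		// Same query shape as the log: all containers whose name matches.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// This branch corresponds to the "No container was found
    			// matching ..." warnings in the log.
    			fmt.Printf("%s: no container found\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", name, len(ids))
    	}
    }
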
	I1217 02:10:54.151396 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:54.162403 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:54.162479 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:54.188307 1498704 cri.go:89] found id: ""
	I1217 02:10:54.188331 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.188340 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:54.188347 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:54.188411 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:54.222781 1498704 cri.go:89] found id: ""
	I1217 02:10:54.222803 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.222818 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:54.222824 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:54.222886 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:54.251344 1498704 cri.go:89] found id: ""
	I1217 02:10:54.251415 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.251439 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:54.251451 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:54.251535 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:54.280867 1498704 cri.go:89] found id: ""
	I1217 02:10:54.280889 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.280898 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:54.280904 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:54.280966 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:54.306150 1498704 cri.go:89] found id: ""
	I1217 02:10:54.306177 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.306185 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:54.306192 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:54.306250 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:54.330272 1498704 cri.go:89] found id: ""
	I1217 02:10:54.330296 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.330310 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:54.330317 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:54.330375 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:54.359393 1498704 cri.go:89] found id: ""
	I1217 02:10:54.359423 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.359431 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:54.359438 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:54.359525 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:54.392745 1498704 cri.go:89] found id: ""
	I1217 02:10:54.392780 1498704 logs.go:282] 0 containers: []
	W1217 02:10:54.392804 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:54.392822 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:54.392835 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:54.469149 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:54.460070   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.460755   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462299   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.462877   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:54.464624   12310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:54.469171 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:54.469185 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:10:54.495699 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:54.495738 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:54.524004 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:54.524031 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:54.579558 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:54.579592 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
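
The timestamps give the shape of the wait loop: pgrep -xnf kube-apiserver.*minikube.* fires roughly every three seconds, and each empty result triggers another round of log gathering. A sketch of that poll loop; the three-second interval matches the log, while the two-minute deadline is an illustrative assumption (this run kept retrying for much longer):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls for a kube-apiserver process the way the
    // log does, giving up once the context deadline passes.
    func waitForAPIServerProcess(ctx context.Context) error {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForAPIServerProcess(ctx); err != nil {
    		fmt.Println(err)
    	}
    }
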
	I1217 02:10:57.095655 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:57.106067 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:57.106145 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:57.130932 1498704 cri.go:89] found id: ""
	I1217 02:10:57.130961 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.130970 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:57.130976 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:10:57.131046 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:57.160073 1498704 cri.go:89] found id: ""
	I1217 02:10:57.160098 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.160107 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:10:57.160113 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:10:57.160173 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:57.184768 1498704 cri.go:89] found id: ""
	I1217 02:10:57.184793 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.184802 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:10:57.184808 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:57.184867 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:57.210332 1498704 cri.go:89] found id: ""
	I1217 02:10:57.210358 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.210367 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:57.210374 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:57.210457 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:57.234920 1498704 cri.go:89] found id: ""
	I1217 02:10:57.234984 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.234999 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:57.235007 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:57.235072 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:57.260151 1498704 cri.go:89] found id: ""
	I1217 02:10:57.260183 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.260193 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:57.260201 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:57.260310 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:57.287966 1498704 cri.go:89] found id: ""
	I1217 02:10:57.288000 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.288009 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:57.288032 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:10:57.288115 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:10:57.312191 1498704 cri.go:89] found id: ""
	I1217 02:10:57.312252 1498704 logs.go:282] 0 containers: []
	W1217 02:10:57.312284 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:57.312306 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:10:57.312330 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:57.344168 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:57.344196 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:57.400635 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:57.400672 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:57.416567 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:57.416594 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:57.485990 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:57.478006   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.478609   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480125   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.480618   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:57.482100   12443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:57.486013 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:10:57.486028 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.011650 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:00.083065 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:00.083205 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:00.177092 1498704 cri.go:89] found id: ""
	I1217 02:11:00.177120 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.177129 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:00.177137 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:00.177210 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:00.240557 1498704 cri.go:89] found id: ""
	I1217 02:11:00.240645 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.240670 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:00.240689 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:00.240818 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:00.290983 1498704 cri.go:89] found id: ""
	I1217 02:11:00.291075 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.291101 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:00.291120 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:00.291245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:00.339816 1498704 cri.go:89] found id: ""
	I1217 02:11:00.339906 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.339935 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:00.339955 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:00.340060 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:00.400482 1498704 cri.go:89] found id: ""
	I1217 02:11:00.400508 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.400516 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:00.400525 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:00.400594 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:00.437316 1498704 cri.go:89] found id: ""
	I1217 02:11:00.437386 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.437413 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:00.437432 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:00.437531 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:00.464791 1498704 cri.go:89] found id: ""
	I1217 02:11:00.464859 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.464881 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:00.464899 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:00.464986 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:00.492400 1498704 cri.go:89] found id: ""
	I1217 02:11:00.492468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:00.492492 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:00.492514 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:00.492551 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:00.549202 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:00.549237 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:00.564046 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:00.564073 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:00.636379 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:00.622995   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.626231   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630023   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.630666   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:00.632491   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:00.636409 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:00.636423 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:00.666039 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:00.666076 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
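
Four log sources are collected on every cycle: the kubelet and containerd journals (last 400 lines each), filtered dmesg output, and a container listing that prefers crictl but falls back to docker ps -a when the crictl invocation fails. A sketch that gathers the same four sources, reusing the exact shell commands from the log (running them through bash is what makes the pipe and the backtick fallback behave identically):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sources maps a label to the exact shell command used in the log above.
    var sources = map[string]string{
    	"kubelet":    "sudo journalctl -u kubelet -n 400",
    	"containerd": "sudo journalctl -u containerd -n 400",
    	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func gather() map[string]string {
    	out := make(map[string]string, len(sources))
    	for label, cmd := range sources {
    		// CombinedOutput keeps stderr, which is often the useful part here.
    		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			out[label] = fmt.Sprintf("error: %v\n%s", err, b)
    			continue
    		}
    		out[label] = string(b)
    	}
    	return out
    }

    func main() {
    	for label, text := range gather() {
    		fmt.Printf("== %s ==\n%s\n", label, text)
    	}
    }
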
	I1217 02:11:03.197992 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:03.209540 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:03.209610 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:03.237337 1498704 cri.go:89] found id: ""
	I1217 02:11:03.237411 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.237436 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:03.237458 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:03.237545 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:03.262191 1498704 cri.go:89] found id: ""
	I1217 02:11:03.262213 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.262221 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:03.262228 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:03.262286 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:03.286816 1498704 cri.go:89] found id: ""
	I1217 02:11:03.286840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.286850 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:03.286856 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:03.286915 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:03.310933 1498704 cri.go:89] found id: ""
	I1217 02:11:03.311007 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.311023 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:03.311031 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:03.311089 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:03.334605 1498704 cri.go:89] found id: ""
	I1217 02:11:03.334628 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.334637 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:03.334643 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:03.334701 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:03.359646 1498704 cri.go:89] found id: ""
	I1217 02:11:03.359681 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.359690 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:03.359697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:03.359789 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:03.391919 1498704 cri.go:89] found id: ""
	I1217 02:11:03.391946 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.391955 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:03.391962 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:03.392025 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:03.419543 1498704 cri.go:89] found id: ""
	I1217 02:11:03.419567 1498704 logs.go:282] 0 containers: []
	W1217 02:11:03.419576 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:03.419586 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:03.419600 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:03.455897 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:03.455925 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:03.512216 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:03.512255 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:03.527344 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:03.527372 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:03.591374 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:03.582628   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.583422   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585195   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.585875   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:03.587387   12664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:03.591396 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:03.591408 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.117735 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:06.128394 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:06.128466 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:06.155397 1498704 cri.go:89] found id: ""
	I1217 02:11:06.155420 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.155430 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:06.155436 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:06.155669 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:06.185554 1498704 cri.go:89] found id: ""
	I1217 02:11:06.185631 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.185682 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:06.185697 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:06.185769 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:06.214540 1498704 cri.go:89] found id: ""
	I1217 02:11:06.214564 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.214573 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:06.214579 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:06.214637 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:06.240468 1498704 cri.go:89] found id: ""
	I1217 02:11:06.240492 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.240501 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:06.240507 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:06.240570 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:06.266674 1498704 cri.go:89] found id: ""
	I1217 02:11:06.266697 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.266706 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:06.266712 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:06.266781 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:06.292194 1498704 cri.go:89] found id: ""
	I1217 02:11:06.292218 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.292227 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:06.292233 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:06.292295 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:06.320979 1498704 cri.go:89] found id: ""
	I1217 02:11:06.321002 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.321011 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:06.321017 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:06.321074 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:06.347269 1498704 cri.go:89] found id: ""
	I1217 02:11:06.347294 1498704 logs.go:282] 0 containers: []
	W1217 02:11:06.347303 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:06.347315 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:06.347326 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:06.409046 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:06.409101 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:06.425379 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:06.425406 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.490322 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.481486   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.482062   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.483580   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.484109   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.485617   12765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:06.490345 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:06.490357 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:06.515786 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:06.515825 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.043785 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:09.054506 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:09.054580 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:09.079819 1498704 cri.go:89] found id: ""
	I1217 02:11:09.079848 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.079856 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:09.079862 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:09.079921 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:09.104928 1498704 cri.go:89] found id: ""
	I1217 02:11:09.104953 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.104963 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:09.104969 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:09.105031 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:09.130212 1498704 cri.go:89] found id: ""
	I1217 02:11:09.130238 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.130246 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:09.130255 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:09.130358 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:09.159130 1498704 cri.go:89] found id: ""
	I1217 02:11:09.159153 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.159162 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:09.159169 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:09.159245 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:09.184267 1498704 cri.go:89] found id: ""
	I1217 02:11:09.184292 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.184301 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:09.184307 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:09.184371 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:09.209170 1498704 cri.go:89] found id: ""
	I1217 02:11:09.209195 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.209204 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:09.209210 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:09.209271 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:09.235842 1498704 cri.go:89] found id: ""
	I1217 02:11:09.235869 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.235878 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:09.235884 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:09.235946 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:09.265413 1498704 cri.go:89] found id: ""
	I1217 02:11:09.265445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:09.265454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:09.265463 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.265475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:09.302759 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:09.302784 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:09.358361 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:09.358394 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:09.378248 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:09.378278 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.451227 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.442210   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.443081   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.444825   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.445191   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.446569   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:09.451247 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:09.451260 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:11.977784 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.988725 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:11.988798 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:12.015755 1498704 cri.go:89] found id: ""
	I1217 02:11:12.015778 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.015788 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:12.015795 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:12.015866 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:12.042225 1498704 cri.go:89] found id: ""
	I1217 02:11:12.042250 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.042259 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:12.042269 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:12.042328 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:12.067951 1498704 cri.go:89] found id: ""
	I1217 02:11:12.067977 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.067987 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:12.067993 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:12.068054 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:12.094539 1498704 cri.go:89] found id: ""
	I1217 02:11:12.094565 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.094574 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:12.094580 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:12.094641 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:12.120422 1498704 cri.go:89] found id: ""
	I1217 02:11:12.120445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.120454 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:12.120461 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:12.120521 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:12.146437 1498704 cri.go:89] found id: ""
	I1217 02:11:12.146465 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.146491 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:12.146498 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:12.146560 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:12.171817 1498704 cri.go:89] found id: ""
	I1217 02:11:12.171840 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.171849 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:12.171855 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:12.171914 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:12.200987 1498704 cri.go:89] found id: ""
	I1217 02:11:12.201013 1498704 logs.go:282] 0 containers: []
	W1217 02:11:12.201022 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:12.201031 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.201043 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:12.232701 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:12.232731 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.288687 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.288722 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.303401 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.303479 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.371087 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.360792   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.361726   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363285   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.363683   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.365149   13002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.371112 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:12.371125 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:14.899732 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.913037 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:14.913112 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:14.939368 1498704 cri.go:89] found id: ""
	I1217 02:11:14.939399 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.939408 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.939415 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:14.939476 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:14.964809 1498704 cri.go:89] found id: ""
	I1217 02:11:14.964835 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.964844 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.964849 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:14.964911 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:14.992442 1498704 cri.go:89] found id: ""
	I1217 02:11:14.992468 1498704 logs.go:282] 0 containers: []
	W1217 02:11:14.992477 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.992483 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:14.992542 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:15.029492 1498704 cri.go:89] found id: ""
	I1217 02:11:15.029518 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.029527 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:15.029534 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:15.029604 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:15.059736 1498704 cri.go:89] found id: ""
	I1217 02:11:15.059760 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.059770 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:15.059776 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:15.059841 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:15.086908 1498704 cri.go:89] found id: ""
	I1217 02:11:15.086991 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.087014 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:15.087029 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:15.087104 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:15.113800 1498704 cri.go:89] found id: ""
	I1217 02:11:15.113829 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.113838 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.113844 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:15.113903 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:15.140421 1498704 cri.go:89] found id: ""
	I1217 02:11:15.140445 1498704 logs.go:282] 0 containers: []
	W1217 02:11:15.140454 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.140463 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.140475 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.197971 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.198003 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.213157 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.213186 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.278282 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.270003   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.270647   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272215   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.272503   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.274140   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.278303 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:15.278316 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:15.303867 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.303900 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.833800 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.844470 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:11:17.844546 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:11:17.871228 1498704 cri.go:89] found id: ""
	I1217 02:11:17.871254 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.871262 1498704 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.871270 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 02:11:17.871345 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:11:17.909403 1498704 cri.go:89] found id: ""
	I1217 02:11:17.909430 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.909438 1498704 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.909444 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 02:11:17.909505 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:11:17.942319 1498704 cri.go:89] found id: ""
	I1217 02:11:17.942341 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.942348 1498704 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.942355 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:11:17.942416 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:11:17.967521 1498704 cri.go:89] found id: ""
	I1217 02:11:17.967546 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.967554 1498704 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.967561 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:11:17.967619 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:11:17.995465 1498704 cri.go:89] found id: ""
	I1217 02:11:17.995488 1498704 logs.go:282] 0 containers: []
	W1217 02:11:17.995518 1498704 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:17.995526 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:11:17.995587 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:11:18.023559 1498704 cri.go:89] found id: ""
	I1217 02:11:18.023587 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.023596 1498704 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.023603 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 02:11:18.023664 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:11:18.049983 1498704 cri.go:89] found id: ""
	I1217 02:11:18.050011 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.050027 1498704 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.050033 1498704 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 02:11:18.050096 1498704 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 02:11:18.081999 1498704 cri.go:89] found id: ""
	I1217 02:11:18.082023 1498704 logs.go:282] 0 containers: []
	W1217 02:11:18.082033 1498704 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.082042 1498704 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.082054 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.096662 1498704 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.096692 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.160156 1498704 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.151288   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.152070   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154015   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.154605   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.156164   13219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.160179 1498704 logs.go:123] Gathering logs for containerd ...
	I1217 02:11:18.160192 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 02:11:18.185291 1498704 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.185325 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.216271 1498704 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.216298 1498704 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.775311 1498704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:20.789631 1498704 out.go:203] 
	W1217 02:11:20.792902 1498704 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:11:20.792939 1498704 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:11:20.792950 1498704 out.go:285] * Related issues:
	W1217 02:11:20.792967 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:11:20.792986 1498704 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:11:20.795906 1498704 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212356563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212424346Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212528511Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212600537Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212667581Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212731344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212789486Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212848654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.212916946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213001919Z" level=info msg="Connect containerd service"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.213359100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.214132836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224058338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224260137Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.224195259Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.233004319Z" level=info msg="Start recovering state"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.265931194Z" level=info msg="Start event monitor"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266119036Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266183250Z" level=info msg="Start streaming server"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266253167Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266318809Z" level=info msg="runtime interface starting up..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266375187Z" level=info msg="starting plugins..."
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.266454539Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:05:19 newest-cni-456492 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:05:19 newest-cni-456492 containerd[555]: time="2025-12-17T02:05:19.268086737Z" level=info msg="containerd successfully booted in 0.090817s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.883521   13885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.884305   13885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.885931   13885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.886344   13885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.888005   13885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:11:33 up  7:54,  0 user,  load average: 0.50, 0.71, 1.20
	Linux newest-cni-456492 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:11:30 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:31 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 17 02:11:31 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:31 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:31 newest-cni-456492 kubelet[13750]: E1217 02:11:31.445095   13750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:31 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:31 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:32 newest-cni-456492 kubelet[13770]: E1217 02:11:32.193837   13770 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:32 newest-cni-456492 kubelet[13790]: E1217 02:11:32.935612   13790 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:32 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:33 newest-cni-456492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 17 02:11:33 newest-cni-456492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:33 newest-cni-456492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:33 newest-cni-456492 kubelet[13831]: E1217 02:11:33.685323   13831 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:33 newest-cni-456492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:33 newest-cni-456492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
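The kubelet log in the dump above shows the root cause for this failure group: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it, no control-plane containers are ever created, and minikube's apiserver wait eventually exits with K8S_APISERVER_MISSING. As a rough illustration only (not minikube's actual code), a preflight check could detect the host's cgroup version the same way kubelet does, by statfs-ing /sys/fs/cgroup; this sketch assumes Linux and the golang.org/x/sys/unix package:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cgroup2SuperMagic mirrors CGROUP2_SUPER_MAGIC from <linux/magic.h>.
const cgroup2SuperMagic = 0x63677270

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup failed:", err)
		return
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified hierarchy): kubelet v1.35+ can start")
	} else {
		// On this report's host (Ubuntu 20.04, kernel 5.15.0-1084-aws) this
		// branch fires, matching the "kubelet is configured to not run on a
		// host using cgroup v1" errors in the kubelet journal above.
		fmt.Println("cgroup v1: kubelet v1.35+ will refuse to start")
	}
}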
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-456492 -n newest-cni-456492: exit status 2 (346.665613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-456492" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.59s)
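For reference, the --format={{.APIServer}} flag passed to minikube status above is a Go text/template evaluated against minikube's status object, which is why the command prints the bare word "Stopped". A minimal self-contained sketch of that mechanism; the Status type here is a stand-in with field names chosen to match the template, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Stopped", matching the -- stdout -- block above.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}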

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (291.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING line repeated 31 times in total while the poll continued]
E1217 02:18:50.572856 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.579305 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.590693 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.612096 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.653593 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.735097 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:50.896648 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:18:51.218001 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:51.859667 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:51.982122 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:18:53.141192 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:18:55.702516 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING line repeated 5 times in total]
E1217 02:19:00.824055 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING line repeated 11 times in total]
E1217 02:19:11.065620 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING line repeated 20 times in total]
E1217 02:19:31.547900 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 18 more times]
E1217 02:19:50.423122 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 18 more times]
E1217 02:20:09.434041 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 2 more times]
E1217 02:20:12.509442 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:20:13.904698 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 13 more times]
I1217 02:20:27.144847 1211243 config.go:182] Loaded profile config "flannel-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 7 more times]
E1217 02:20:35.917865 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/old-k8s-version-859530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 1 more time]
E1217 02:20:37.657106 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.663503 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.674921 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.696301 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.737784 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.819210 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:37.980712 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
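The burst of cert_rotation errors above is client-go repeatedly retrying a client certificate whose backing file is gone (the calico-721629 profile has already been torn down). A minimal sketch of the underlying condition, using a hypothetical path modeled on the ones in this log, not the actual minikube code:

package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// Hypothetical profile path, modeled on the client.crt paths in the log above.
	certPath := os.ExpandEnv("$HOME/.minikube/profiles/calico-721629/client.crt")

	// The cert loader retries on a timer; each attempt fails the same way
	// because the file was removed when the profile was deleted.
	if _, err := os.Stat(certPath); errors.Is(err, os.ErrNotExist) {
		fmt.Println("client cert missing, reload will keep failing:", certPath)
		return
	}
	fmt.Println("client cert present:", certPath)
}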
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:20:38.302779 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:20:38.944749 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 1 more time]
E1217 02:20:40.227052 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 1 more time]
E1217 02:20:42.788777 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 4 more times]
E1217 02:20:47.910533 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 30 more times]
E1217 02:21:18.635073 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 14 more times]
E1217 02:21:33.441739 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/default-k8s-diff-port-069646/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:21:34.430862 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kindnet-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 4 more times]
E1217 02:21:39.952701 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 19 more times]
E1217 02:21:59.596979 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:12.632473 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.639005 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.650363 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.672754 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.714840 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.796186 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:12.957882 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:13.280217 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:22:13.922202 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:15.204123 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:22.887633 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:30.040609 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:33.129997 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:53.611507 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 02:22:57.746900 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/auto-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 2 (317.495454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-178365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-178365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.682µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-178365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
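For reference, the repeated poll above is simply a pod list filtered by label selector in the kubernetes-dashboard namespace. A minimal manual equivalent is sketched below, assuming the no-preload-178365 kubeconfig context from this run is still present:

	kubectl --context no-preload-178365 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver stopped, this fails with the same "dial tcp 192.168.76.2:8443: connect: connection refused" seen in the warnings above.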
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-178365
helpers_test.go:244: (dbg) docker inspect no-preload-178365:

-- stdout --
	[
	    {
	        "Id": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	        "Created": "2025-12-17T01:53:10.849194081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1494487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:03:06.71743355Z",
	            "FinishedAt": "2025-12-17T02:03:05.348756992Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/hosts",
	        "LogPath": "/var/lib/docker/containers/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2/e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2-json.log",
	        "Name": "/no-preload-178365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-178365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6847d19136fc81b9325de7e6ab0c17c59ddeb26284851661b0461244c6addd2",
	                "LowerDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5d391ef2a5d8651d46aa64e145baf142912a850e91c938f5a6b52e7cab48acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178365",
	                "Source": "/var/lib/docker/volumes/no-preload-178365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178365",
	                "name.minikube.sigs.k8s.io": "no-preload-178365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9255e0863872038f878a0377593d952443e5d8a7e0d1715541fab06d752ef770",
	            "SandboxKey": "/var/run/docker/netns/9255e0863872",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-178365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9e:f4:59:45:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66fbd2b458ffd906b78a053bb9c1b508472bd7023ef3e155390d7a54357cf224",
	                    "EndpointID": "02e66a97e08a8d712f4ba9f711db1ac614b5e96335d8aceb3d7eccb7c2a2e478",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178365",
	                        "e6847d19136f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
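The full docker inspect dump above can be narrowed to the fields this post-mortem actually checks by using docker's Go-template output. A sketch, with the container name taken from this run:

	docker inspect -f '{{.State.Status}}' no-preload-178365
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-178365

The first prints "running" for this container; the second prints the host port (34257 here) that 127.0.0.1 forwards to the apiserver's 8443/tcp.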
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 2 (317.086482ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
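Each status probe above pulls a single field with a Go template ({{.APIServer}}, {{.Host}}). As a sketch, the same information is available in one call via minikube's JSON output:

	out/minikube-linux-arm64 status -p no-preload-178365 --output json

For this run that reports the Host as Running while the APIServer is Stopped, which is why the harness notes "exit status 2 (may be ok)": minikube status uses nonzero exit codes to encode degraded component states, not only command failures.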
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178365 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-721629 sudo iptables -t nat -L -n -v                                 │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl status kubelet --all --full --no-pager         │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl cat kubelet --no-pager                         │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl status docker --all --full --no-pager          │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo systemctl cat docker --no-pager                          │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /etc/docker/daemon.json                              │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo docker system info                                       │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo systemctl cat cri-docker --no-pager                      │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cri-dockerd --version                                    │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl status containerd --all --full --no-pager      │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl cat containerd --no-pager                      │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /lib/systemd/system/containerd.service               │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo cat /etc/containerd/config.toml                          │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo containerd config dump                                   │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo systemctl status crio --all --full --no-pager            │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │                     │
	│ ssh     │ -p bridge-721629 sudo systemctl cat crio --no-pager                            │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ ssh     │ -p bridge-721629 sudo crio config                                              │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	│ delete  │ -p bridge-721629                                                               │ bridge-721629 │ jenkins │ v1.37.0 │ 17 Dec 25 02:22 UTC │ 17 Dec 25 02:22 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:20:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:20:58.147855 1559626 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:20:58.147990 1559626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:20:58.148000 1559626 out.go:374] Setting ErrFile to fd 2...
	I1217 02:20:58.148005 1559626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:20:58.148251 1559626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 02:20:58.148702 1559626 out.go:368] Setting JSON to false
	I1217 02:20:58.149578 1559626 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29009,"bootTime":1765909050,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 02:20:58.149681 1559626 start.go:143] virtualization:  
	I1217 02:20:58.156529 1559626 out.go:179] * [bridge-721629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:20:58.160215 1559626 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:20:58.160343 1559626 notify.go:221] Checking for updates...
	I1217 02:20:58.166775 1559626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:20:58.169903 1559626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:20:58.173156 1559626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 02:20:58.176384 1559626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:20:58.179544 1559626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:20:58.183138 1559626 config.go:182] Loaded profile config "no-preload-178365": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 02:20:58.183244 1559626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:20:58.214267 1559626 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:20:58.214402 1559626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:20:58.270354 1559626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:20:58.260819939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:20:58.270456 1559626 docker.go:319] overlay module found
	I1217 02:20:58.273705 1559626 out.go:179] * Using the docker driver based on user configuration
	I1217 02:20:58.276673 1559626 start.go:309] selected driver: docker
	I1217 02:20:58.276693 1559626 start.go:927] validating driver "docker" against <nil>
	I1217 02:20:58.276707 1559626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:20:58.277429 1559626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:20:58.338102 1559626 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:20:58.323895101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:20:58.338264 1559626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 02:20:58.338480 1559626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:20:58.341534 1559626 out.go:179] * Using Docker driver with root privileges
	I1217 02:20:58.344440 1559626 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:20:58.344465 1559626 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 02:20:58.344538 1559626 start.go:353] cluster config:
	{Name:bridge-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:20:58.347685 1559626 out.go:179] * Starting "bridge-721629" primary control-plane node in "bridge-721629" cluster
	I1217 02:20:58.350550 1559626 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 02:20:58.353530 1559626 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:20:58.356598 1559626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:20:58.356775 1559626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:20:58.356803 1559626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1217 02:20:58.356816 1559626 cache.go:65] Caching tarball of preloaded images
	I1217 02:20:58.356883 1559626 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 02:20:58.356898 1559626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1217 02:20:58.357002 1559626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/config.json ...
	I1217 02:20:58.357025 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/config.json: {Name:mk46729cb1a3dc1ede01f9154ed2c7c0048cfbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:20:58.385411 1559626 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:20:58.385435 1559626 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:20:58.385452 1559626 cache.go:243] Successfully downloaded all kic artifacts
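The image.go lines above probe the local daemon for the kicbase image and skip both the pull and the load when it is already present. A minimal sketch of that presence check via the docker CLI (illustrative; the real check may go through the daemon API rather than shelling out):

    // image_check_sketch.go — `docker image inspect` exits non-zero when the
    // reference is absent from the local daemon, which is enough to decide
    // pull-vs-skip.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func imageInDaemon(ref string) bool {
    	// --format {{.Id}} keeps the output small; only the exit code matters.
    	return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
    	if imageInDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not found locally, pulling")
    	}
    }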
	I1217 02:20:58.385483 1559626 start.go:360] acquireMachinesLock for bridge-721629: {Name:mk1a89cc01f5c5509d22874bbdae537d4714060e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:20:58.385590 1559626 start.go:364] duration metric: took 88.198µs to acquireMachinesLock for "bridge-721629"
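The acquireMachinesLock lines above show a file-backed lock taken with a 500ms retry delay and a 10-minute timeout ({Delay:500ms Timeout:10m0s}). A minimal sketch of such a lock, assuming a plain O_EXCL lock file rather than minikube's actual lock package:

    // lockfile_sketch.go — hypothetical file-backed mutex with retry delay
    // and overall timeout, mirroring the parameters logged above.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_CREATE|O_EXCL fails if the lock file already exists.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return nil, err
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held")
    }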
	I1217 02:20:58.385619 1559626 start.go:93] Provisioning new machine with config: &{Name:bridge-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:20:58.385770 1559626 start.go:125] createHost starting for "" (driver="docker")
	I1217 02:20:58.389317 1559626 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 02:20:58.389550 1559626 start.go:159] libmachine.API.Create for "bridge-721629" (driver="docker")
	I1217 02:20:58.389583 1559626 client.go:173] LocalClient.Create starting
	I1217 02:20:58.389685 1559626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
	I1217 02:20:58.389723 1559626 main.go:143] libmachine: Decoding PEM data...
	I1217 02:20:58.389748 1559626 main.go:143] libmachine: Parsing certificate...
	I1217 02:20:58.389799 1559626 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
	I1217 02:20:58.389819 1559626 main.go:143] libmachine: Decoding PEM data...
	I1217 02:20:58.389834 1559626 main.go:143] libmachine: Parsing certificate...
	I1217 02:20:58.390217 1559626 cli_runner.go:164] Run: docker network inspect bridge-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 02:20:58.409010 1559626 cli_runner.go:211] docker network inspect bridge-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 02:20:58.409087 1559626 network_create.go:284] running [docker network inspect bridge-721629] to gather additional debugging logs...
	I1217 02:20:58.409103 1559626 cli_runner.go:164] Run: docker network inspect bridge-721629
	W1217 02:20:58.433786 1559626 cli_runner.go:211] docker network inspect bridge-721629 returned with exit code 1
	I1217 02:20:58.433817 1559626 network_create.go:287] error running [docker network inspect bridge-721629]: docker network inspect bridge-721629: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-721629 not found
	I1217 02:20:58.433841 1559626 network_create.go:289] output of [docker network inspect bridge-721629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-721629 not found
	
	** /stderr **
	I1217 02:20:58.433938 1559626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:20:58.451230 1559626 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
	I1217 02:20:58.451564 1559626 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2ed269c07853 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f6:69:e2:30:61} reservation:<nil>}
	I1217 02:20:58.451892 1559626 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e7c64c11fb3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:f0:d4:83:34:ca} reservation:<nil>}
	I1217 02:20:58.452117 1559626 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66fbd2b458ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:50:1f:6f:b2:3d} reservation:<nil>}
	I1217 02:20:58.452529 1559626 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019df570}
	I1217 02:20:58.452557 1559626 network_create.go:124] attempt to create docker network bridge-721629 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 02:20:58.452613 1559626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-721629 bridge-721629
	I1217 02:20:58.509251 1559626 network_create.go:108] docker network bridge-721629 192.168.85.0/24 created
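The free-subnet scan above tries 192.168.x.0/24 candidates starting at .49 with an apparent step of 9 in the third octet (49, 58, 67, 76, 85, ...) and takes the first CIDR not already backing a bridge interface. A minimal Go sketch of that selection, assuming the taken set has already been gathered from `docker network inspect` (hypothetical helper, not minikube's network.go):

    // subnet_scan_sketch.go — pick the first free 192.168.x.0/24 candidate.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	// Subnets the log shows as already backing docker bridges.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as in the log
    }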
	I1217 02:20:58.509283 1559626 kic.go:121] calculated static IP "192.168.85.2" for the "bridge-721629" container
	I1217 02:20:58.509368 1559626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 02:20:58.525915 1559626 cli_runner.go:164] Run: docker volume create bridge-721629 --label name.minikube.sigs.k8s.io=bridge-721629 --label created_by.minikube.sigs.k8s.io=true
	I1217 02:20:58.548455 1559626 oci.go:103] Successfully created a docker volume bridge-721629
	I1217 02:20:58.548565 1559626 cli_runner.go:164] Run: docker run --rm --name bridge-721629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-721629 --entrypoint /usr/bin/test -v bridge-721629:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 02:20:59.079400 1559626 oci.go:107] Successfully prepared a docker volume bridge-721629
	I1217 02:20:59.079479 1559626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:20:59.079493 1559626 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 02:20:59.079580 1559626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-721629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 02:21:03.450630 1559626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-721629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.371007973s)
	I1217 02:21:03.450661 1559626 kic.go:203] duration metric: took 4.371165734s to extract preloaded images to volume ...
	W1217 02:21:03.450804 1559626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 02:21:03.450917 1559626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 02:21:03.506125 1559626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-721629 --name bridge-721629 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-721629 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-721629 --network bridge-721629 --ip 192.168.85.2 --volume bridge-721629:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 02:21:03.819415 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Running}}
	I1217 02:21:03.840382 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:03.867682 1559626 cli_runner.go:164] Run: docker exec bridge-721629 stat /var/lib/dpkg/alternatives/iptables
	I1217 02:21:03.916862 1559626 oci.go:144] the created container "bridge-721629" has a running status.
	I1217 02:21:03.916899 1559626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa...
	I1217 02:21:04.047329 1559626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 02:21:04.073880 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:04.096408 1559626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 02:21:04.096428 1559626 kic_runner.go:114] Args: [docker exec --privileged bridge-721629 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 02:21:04.169336 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:04.189361 1559626 machine.go:94] provisionDockerMachine start ...
	I1217 02:21:04.189463 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:04.221796 1559626 main.go:143] libmachine: Using SSH client type: native
	I1217 02:21:04.222126 1559626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1217 02:21:04.222136 1559626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:21:04.222713 1559626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56108->127.0.0.1:34294: read: connection reset by peer
	I1217 02:21:07.353042 1559626 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-721629
	
	I1217 02:21:07.353068 1559626 ubuntu.go:182] provisioning hostname "bridge-721629"
	I1217 02:21:07.353139 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:07.370276 1559626 main.go:143] libmachine: Using SSH client type: native
	I1217 02:21:07.370598 1559626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1217 02:21:07.370609 1559626 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-721629 && echo "bridge-721629" | sudo tee /etc/hostname
	I1217 02:21:07.523404 1559626 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-721629
	
	I1217 02:21:07.523571 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:07.541742 1559626 main.go:143] libmachine: Using SSH client type: native
	I1217 02:21:07.542053 1559626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1217 02:21:07.542077 1559626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-721629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-721629/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-721629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:21:07.677921 1559626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:21:07.677990 1559626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
	I1217 02:21:07.678017 1559626 ubuntu.go:190] setting up certificates
	I1217 02:21:07.678036 1559626 provision.go:84] configureAuth start
	I1217 02:21:07.678128 1559626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-721629
	I1217 02:21:07.695040 1559626 provision.go:143] copyHostCerts
	I1217 02:21:07.695110 1559626 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
	I1217 02:21:07.695124 1559626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
	I1217 02:21:07.695201 1559626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
	I1217 02:21:07.695303 1559626 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
	I1217 02:21:07.695312 1559626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
	I1217 02:21:07.695340 1559626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
	I1217 02:21:07.695394 1559626 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
	I1217 02:21:07.695401 1559626 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
	I1217 02:21:07.695425 1559626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
	I1217 02:21:07.695520 1559626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.bridge-721629 san=[127.0.0.1 192.168.85.2 bridge-721629 localhost minikube]
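provision.go above mints a server certificate whose SANs cover every address and name the machine answers on (127.0.0.1, 192.168.85.2, bridge-721629, localhost, minikube). A minimal sketch with Go's crypto/x509, self-signed for brevity where the real step signs with the shared minikube CA (ca.pem / ca-key.pem):

    // servercert_sketch.go — generate a cert carrying the SANs listed above.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-721629"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		// SANs matching the san=[...] list in the provision.go line:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:    []string{"bridge-721629", "localhost", "minikube"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }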
	I1217 02:21:07.849510 1559626 provision.go:177] copyRemoteCerts
	I1217 02:21:07.849580 1559626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:21:07.849709 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:07.867320 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:07.961631 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:21:07.979813 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 02:21:07.998643 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:21:08.022056 1559626 provision.go:87] duration metric: took 344.006661ms to configureAuth
	I1217 02:21:08.022131 1559626 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:21:08.022380 1559626 config.go:182] Loaded profile config "bridge-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 02:21:08.022396 1559626 machine.go:97] duration metric: took 3.83301592s to provisionDockerMachine
	I1217 02:21:08.022403 1559626 client.go:176] duration metric: took 9.632813724s to LocalClient.Create
	I1217 02:21:08.022424 1559626 start.go:167] duration metric: took 9.63287642s to libmachine.API.Create "bridge-721629"
	I1217 02:21:08.022433 1559626 start.go:293] postStartSetup for "bridge-721629" (driver="docker")
	I1217 02:21:08.022442 1559626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:21:08.022513 1559626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:21:08.022568 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:08.041586 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:08.142025 1559626 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:21:08.146307 1559626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:21:08.146336 1559626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:21:08.146349 1559626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
	I1217 02:21:08.146420 1559626 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
	I1217 02:21:08.146499 1559626 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
	I1217 02:21:08.146625 1559626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:21:08.162846 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:21:08.189859 1559626 start.go:296] duration metric: took 167.411306ms for postStartSetup
	I1217 02:21:08.190226 1559626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-721629
	I1217 02:21:08.207106 1559626 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/config.json ...
	I1217 02:21:08.207394 1559626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:21:08.207451 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:08.227046 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:08.326738 1559626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:21:08.331694 1559626 start.go:128] duration metric: took 9.945906622s to createHost
	I1217 02:21:08.331721 1559626 start.go:83] releasing machines lock for "bridge-721629", held for 9.946118423s
	I1217 02:21:08.331802 1559626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-721629
	I1217 02:21:08.348829 1559626 ssh_runner.go:195] Run: cat /version.json
	I1217 02:21:08.348885 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:08.349141 1559626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:21:08.349213 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:08.370217 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:08.389366 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:08.465468 1559626 ssh_runner.go:195] Run: systemctl --version
	I1217 02:21:08.560952 1559626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:21:08.565509 1559626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:21:08.565607 1559626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:21:08.593527 1559626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 02:21:08.593602 1559626 start.go:496] detecting cgroup driver to use...
	I1217 02:21:08.593673 1559626 detect.go:187] detected "cgroupfs" cgroup driver on host os
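The "detected cgroupfs" line matches the CgroupDriver:cgroupfs field in the docker info dumps near the top of this run. A minimal sketch reading that same field through the docker CLI (illustrative; detect.go may weigh other signals too):

    // cgroup_detect_sketch.go — ask the docker daemon which cgroup driver it uses.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this runner
    }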
	I1217 02:21:08.593755 1559626 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 02:21:08.609171 1559626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:21:08.622453 1559626 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:21:08.622545 1559626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:21:08.640575 1559626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:21:08.660106 1559626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:21:08.784380 1559626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:21:08.925496 1559626 docker.go:234] disabling docker service ...
	I1217 02:21:08.925568 1559626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:21:08.949326 1559626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:21:08.962978 1559626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:21:09.075057 1559626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:21:09.187502 1559626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:21:09.201760 1559626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:21:09.216366 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:21:09.225371 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:21:09.234875 1559626 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:21:09.235028 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:21:09.244099 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:21:09.252795 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:21:09.261553 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:21:09.270639 1559626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:21:09.278707 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:21:09.287854 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:21:09.297207 1559626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:21:09.306604 1559626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:21:09.314360 1559626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
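The sed pipeline above rewrites /etc/containerd/config.toml in place: pin the sandbox image, force the runc v2 shim, and set SystemdCgroup = false to match the cgroupfs driver detected earlier. A minimal Go equivalent of just the SystemdCgroup edit (illustrative, not minikube's containerd.go):

    // toml_rewrite_sketch.go — the Go analogue of
    // `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte("    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n      SystemdCgroup = true\n")
    	// (?m) makes ^ and $ match per line; ${1} keeps the original indentation.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    	fmt.Print(string(out))
    }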
	I1217 02:21:09.322811 1559626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:21:09.436530 1559626 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:21:09.555948 1559626 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 02:21:09.556034 1559626 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 02:21:09.559849 1559626 start.go:564] Will wait 60s for crictl version
	I1217 02:21:09.559913 1559626 ssh_runner.go:195] Run: which crictl
	I1217 02:21:09.563249 1559626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:21:09.586617 1559626 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 02:21:09.586685 1559626 ssh_runner.go:195] Run: containerd --version
	I1217 02:21:09.606241 1559626 ssh_runner.go:195] Run: containerd --version
	I1217 02:21:09.639475 1559626 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1217 02:21:09.642456 1559626 cli_runner.go:164] Run: docker network inspect bridge-721629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:21:09.659460 1559626 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:21:09.663403 1559626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:21:09.673543 1559626 kubeadm.go:884] updating cluster {Name:bridge-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:21:09.673680 1559626 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1217 02:21:09.673750 1559626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:21:09.700567 1559626 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:21:09.700594 1559626 containerd.go:534] Images already preloaded, skipping extraction
	I1217 02:21:09.700653 1559626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:21:09.727519 1559626 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 02:21:09.727543 1559626 cache_images.go:86] Images are preloaded, skipping loading
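The two `sudo crictl images --output json` runs above confirm every required image is already in the containerd image store, so nothing gets loaded. A minimal sketch of such a check; the JSON field names (`images`, `repoTags`) follow CRI's ListImagesResponse and should be treated as an assumption:

    // preload_check_sketch.go — verify required images exist in the CRI store.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList matches the relevant slice of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range []string{"registry.k8s.io/pause:3.10.1"} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }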
	I1217 02:21:09.727552 1559626 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1217 02:21:09.727641 1559626 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-721629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1217 02:21:09.727707 1559626 ssh_runner.go:195] Run: sudo crictl info
	I1217 02:21:09.753558 1559626 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:21:09.753592 1559626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:21:09.753614 1559626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-721629 NodeName:bridge-721629 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:21:09.753754 1559626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-721629"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:21:09.753826 1559626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 02:21:09.761468 1559626 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:21:09.761541 1559626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:21:09.769305 1559626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1217 02:21:09.782778 1559626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 02:21:09.796385 1559626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
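The kubeadm config rendered above is staged as kubeadm.yaml.new and only copied over kubeadm.yaml afterwards (see the `cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml` step below). A minimal sketch of that stage-then-promote pattern (hypothetical helper, not minikube's bootstrapper):

    // stage_promote_sketch.go — write config beside the live file, promote on change.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func stageAndPromote(path string, data []byte) error {
    	staged := path + ".new"
    	if err := os.WriteFile(staged, data, 0o644); err != nil {
    		return err
    	}
    	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, data) {
    		return os.Remove(staged) // unchanged: keep the live file as-is
    	}
    	return os.Rename(staged, path) // changed (or absent): promote the staged copy
    }

    func main() {
    	if err := stageAndPromote("/tmp/kubeadm.yaml", []byte("kind: ClusterConfiguration\n")); err != nil {
    		panic(err)
    	}
    	fmt.Println("kubeadm config in place")
    }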
	I1217 02:21:09.809819 1559626 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:21:09.813433 1559626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:21:09.823346 1559626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:21:09.941438 1559626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:21:09.957558 1559626 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629 for IP: 192.168.85.2
	I1217 02:21:09.957578 1559626 certs.go:195] generating shared ca certs ...
	I1217 02:21:09.957594 1559626 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:09.957755 1559626 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
	I1217 02:21:09.957813 1559626 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
	I1217 02:21:09.957829 1559626 certs.go:257] generating profile certs ...
	I1217 02:21:09.957903 1559626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.key
	I1217 02:21:09.957924 1559626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.crt with IP's: []
	I1217 02:21:10.056819 1559626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.crt ...
	I1217 02:21:10.056855 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.crt: {Name:mk7cab7904ac1fe2093dc2b4ff997e7109961981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.057075 1559626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.key ...
	I1217 02:21:10.057089 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/client.key: {Name:mke00ceac25854c1bc416c0befcafb959a21a8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.057196 1559626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key.9b52703a
	I1217 02:21:10.057214 1559626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt.9b52703a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 02:21:10.109889 1559626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt.9b52703a ...
	I1217 02:21:10.109918 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt.9b52703a: {Name:mkab300fcc496a230c27d275f981e1a922aa4f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.110096 1559626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key.9b52703a ...
	I1217 02:21:10.110112 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key.9b52703a: {Name:mk04247afe888acfadbf2a9301f93eb1c7c21197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.110190 1559626 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt.9b52703a -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt
	I1217 02:21:10.110273 1559626 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key.9b52703a -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key
	I1217 02:21:10.110334 1559626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.key
	I1217 02:21:10.110352 1559626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.crt with IP's: []
	I1217 02:21:10.653807 1559626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.crt ...
	I1217 02:21:10.653840 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.crt: {Name:mk4aed379dbb29e1f79073d3939ab9a08baf6838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.654030 1559626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.key ...
	I1217 02:21:10.654047 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.key: {Name:mk0c03d98a07806ebce7a0354ccc87834d00d4ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:10.654225 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
	W1217 02:21:10.654273 1559626 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
	I1217 02:21:10.654287 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:21:10.654314 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:21:10.654342 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:21:10.654371 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
	I1217 02:21:10.654422 1559626 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
	I1217 02:21:10.655014 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:21:10.674327 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 02:21:10.694111 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:21:10.713831 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 02:21:10.734743 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 02:21:10.753731 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:21:10.773047 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:21:10.792506 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/bridge-721629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:21:10.811310 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:21:10.830078 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
	I1217 02:21:10.848415 1559626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
	I1217 02:21:10.866354 1559626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:21:10.879296 1559626 ssh_runner.go:195] Run: openssl version
	I1217 02:21:10.885750 1559626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:21:10.893320 1559626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:21:10.900775 1559626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:21:10.904459 1559626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:21:10.904525 1559626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:21:10.945847 1559626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:21:10.953465 1559626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 02:21:10.960926 1559626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
	I1217 02:21:10.968613 1559626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
	I1217 02:21:10.976132 1559626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
	I1217 02:21:10.980017 1559626 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
	I1217 02:21:10.980082 1559626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
	I1217 02:21:11.022082 1559626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:21:11.029880 1559626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
	I1217 02:21:11.037456 1559626 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
	I1217 02:21:11.045221 1559626 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
	I1217 02:21:11.053005 1559626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
	I1217 02:21:11.057088 1559626 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
	I1217 02:21:11.057202 1559626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
	I1217 02:21:11.099276 1559626 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:21:11.107512 1559626 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
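Each `openssl x509 -hash -noout` / `ln -fs` pair above installs a certificate under its OpenSSL subject-hash name (<hash>.0) so TLS stacks can resolve it from /etc/ssl/certs. A minimal sketch of one such pair, assuming the openssl binary is on PATH and using the paths from the log:

    // hashlink_sketch.go — recreate one hash-symlink from the run above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA per the log
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // mirror `ln -fs`: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }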
	I1217 02:21:11.115547 1559626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:21:11.119701 1559626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 02:21:11.119807 1559626 kubeadm.go:401] StartCluster: {Name:bridge-721629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-721629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:21:11.119902 1559626 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 02:21:11.120008 1559626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:21:11.158398 1559626 cri.go:89] found id: ""
	I1217 02:21:11.158546 1559626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:21:11.171128 1559626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 02:21:11.179242 1559626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:21:11.179351 1559626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:21:11.188733 1559626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:21:11.188804 1559626 kubeadm.go:158] found existing configuration files:
	
	I1217 02:21:11.188869 1559626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:21:11.198157 1559626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:21:11.198269 1559626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:21:11.205919 1559626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:21:11.214488 1559626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:21:11.214577 1559626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:21:11.222269 1559626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:21:11.230114 1559626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:21:11.230181 1559626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:21:11.237801 1559626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:21:11.245753 1559626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:21:11.245818 1559626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:21:11.253157 1559626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:21:11.292862 1559626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 02:21:11.299189 1559626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:21:11.326704 1559626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:21:11.326876 1559626 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 02:21:11.326945 1559626 kubeadm.go:319] OS: Linux
	I1217 02:21:11.327026 1559626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:21:11.327110 1559626 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:21:11.327190 1559626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:21:11.327272 1559626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:21:11.327353 1559626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:21:11.327475 1559626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:21:11.327563 1559626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:21:11.327650 1559626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:21:11.327710 1559626 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:21:11.398658 1559626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:21:11.398832 1559626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:21:11.398952 1559626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:21:11.407043 1559626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:21:11.413292 1559626 out.go:252]   - Generating certificates and keys ...
	I1217 02:21:11.413468 1559626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:21:11.413582 1559626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:21:12.223416 1559626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:21:12.722414 1559626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:21:13.131728 1559626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:21:13.975420 1559626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:21:14.484084 1559626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:21:14.484442 1559626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-721629 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 02:21:15.046299 1559626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:21:15.046700 1559626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-721629 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 02:21:15.553927 1559626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:21:16.271151 1559626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:21:16.755710 1559626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:21:16.755990 1559626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:21:17.067100 1559626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:21:18.070965 1559626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:21:18.374141 1559626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:21:18.568823 1559626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:21:18.878301 1559626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:21:18.879210 1559626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:21:18.882155 1559626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:21:18.885865 1559626 out.go:252]   - Booting up control plane ...
	I1217 02:21:18.886007 1559626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:21:18.886111 1559626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:21:18.886384 1559626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:21:18.912067 1559626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:21:18.912186 1559626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:21:18.920064 1559626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:21:18.920465 1559626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:21:18.920680 1559626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:21:19.070153 1559626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:21:19.070280 1559626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:21:20.574067 1559626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500639526s
	I1217 02:21:20.574233 1559626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 02:21:20.574319 1559626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 02:21:20.574414 1559626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 02:21:20.574535 1559626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 02:21:25.698165 1559626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.124213876s
	I1217 02:21:26.842686 1559626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.267676741s
	I1217 02:21:27.075917 1559626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501699715s
	I1217 02:21:27.110597 1559626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 02:21:27.125570 1559626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 02:21:27.143485 1559626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 02:21:27.143695 1559626 kubeadm.go:319] [mark-control-plane] Marking the node bridge-721629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 02:21:27.156657 1559626 kubeadm.go:319] [bootstrap-token] Using token: c5fq0r.3drjtomejqtz6ndz
	I1217 02:21:27.159652 1559626 out.go:252]   - Configuring RBAC rules ...
	I1217 02:21:27.159818 1559626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 02:21:27.173155 1559626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 02:21:27.182192 1559626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 02:21:27.186242 1559626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 02:21:27.190595 1559626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 02:21:27.194944 1559626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 02:21:27.483051 1559626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 02:21:27.918799 1559626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 02:21:28.483394 1559626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 02:21:28.484608 1559626 kubeadm.go:319] 
	I1217 02:21:28.484686 1559626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 02:21:28.484696 1559626 kubeadm.go:319] 
	I1217 02:21:28.484778 1559626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 02:21:28.484792 1559626 kubeadm.go:319] 
	I1217 02:21:28.484818 1559626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 02:21:28.484881 1559626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 02:21:28.484934 1559626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 02:21:28.484942 1559626 kubeadm.go:319] 
	I1217 02:21:28.484996 1559626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 02:21:28.485004 1559626 kubeadm.go:319] 
	I1217 02:21:28.485051 1559626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 02:21:28.485059 1559626 kubeadm.go:319] 
	I1217 02:21:28.485111 1559626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 02:21:28.485190 1559626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 02:21:28.485262 1559626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 02:21:28.485271 1559626 kubeadm.go:319] 
	I1217 02:21:28.485356 1559626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 02:21:28.485437 1559626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 02:21:28.485444 1559626 kubeadm.go:319] 
	I1217 02:21:28.485530 1559626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c5fq0r.3drjtomejqtz6ndz \
	I1217 02:21:28.485639 1559626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6031ce8e9641affed80fbd3275524f7a99669ab559b9101b175d38b0e710ae78 \
	I1217 02:21:28.485690 1559626 kubeadm.go:319] 	--control-plane 
	I1217 02:21:28.485698 1559626 kubeadm.go:319] 
	I1217 02:21:28.485783 1559626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 02:21:28.485791 1559626 kubeadm.go:319] 
	I1217 02:21:28.485874 1559626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c5fq0r.3drjtomejqtz6ndz \
	I1217 02:21:28.485981 1559626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6031ce8e9641affed80fbd3275524f7a99669ab559b9101b175d38b0e710ae78 
	I1217 02:21:28.489847 1559626 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 02:21:28.490073 1559626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:21:28.490181 1559626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:21:28.490197 1559626 cni.go:84] Creating CNI manager for "bridge"
	I1217 02:21:28.493116 1559626 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 02:21:28.495946 1559626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 02:21:28.503946 1559626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
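The 496-byte 1-k8s.conflist written above configures containerd's CRI plugin to use the kernel bridge for pod networking. The log does not show the file's contents; the snippet below is a representative bridge-plugin conflist of the general shape minikube generates, embedded in Go purely for illustration:

package main

import "fmt"

// Representative /etc/cni/net.d/1-k8s.conflist for the "bridge" CNI choice.
// The subnet and plugin options here are illustrative, not copied from the
// cluster under test.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
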
	I1217 02:21:28.518498 1559626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 02:21:28.518621 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:28.518633 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-721629 minikube.k8s.io/updated_at=2025_12_17T02_21_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=bridge-721629 minikube.k8s.io/primary=true
	I1217 02:21:28.671948 1559626 ops.go:34] apiserver oom_adj: -16
	I1217 02:21:28.672098 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:29.172246 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:29.672175 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:30.173012 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:30.672459 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:31.172246 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:31.672762 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:32.172861 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:32.673147 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:33.172144 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:33.672754 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:34.173220 1559626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 02:21:34.300267 1559626 kubeadm.go:1114] duration metric: took 5.781747832s to wait for elevateKubeSystemPrivileges
	I1217 02:21:34.300303 1559626 kubeadm.go:403] duration metric: took 23.18049894s to StartCluster
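The twelve back-to-back "kubectl get sa default" runs above are a poll: elevateKubeSystemPrivileges cannot bind cluster-admin to kube-system's default service account until the token controller has created it. A sketch of the same wait loop, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds, matching
// the ~500ms cadence visible in the timestamps above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; safe to grant RBAC
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
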
	I1217 02:21:34.300321 1559626 settings.go:142] acquiring lock: {Name:mk239539c562f239b808b1e2f58e8faa48c959ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:34.300381 1559626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 02:21:34.301346 1559626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/kubeconfig: {Name:mke3deb501c0cb452a0cea1fe2d9d2a3341b4d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:21:34.301563 1559626 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 02:21:34.301717 1559626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 02:21:34.301971 1559626 config.go:182] Loaded profile config "bridge-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 02:21:34.302013 1559626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:21:34.302080 1559626 addons.go:70] Setting storage-provisioner=true in profile "bridge-721629"
	I1217 02:21:34.302099 1559626 addons.go:239] Setting addon storage-provisioner=true in "bridge-721629"
	I1217 02:21:34.302127 1559626 host.go:66] Checking if "bridge-721629" exists ...
	I1217 02:21:34.302787 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:34.303186 1559626 addons.go:70] Setting default-storageclass=true in profile "bridge-721629"
	I1217 02:21:34.303212 1559626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-721629"
	I1217 02:21:34.303516 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:34.306436 1559626 out.go:179] * Verifying Kubernetes components...
	I1217 02:21:34.311764 1559626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:21:34.327915 1559626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:21:34.330813 1559626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:21:34.330838 1559626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:21:34.330902 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:34.342435 1559626 addons.go:239] Setting addon default-storageclass=true in "bridge-721629"
	I1217 02:21:34.342472 1559626 host.go:66] Checking if "bridge-721629" exists ...
	I1217 02:21:34.342905 1559626 cli_runner.go:164] Run: docker container inspect bridge-721629 --format={{.State.Status}}
	I1217 02:21:34.369941 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:34.383627 1559626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:21:34.383648 1559626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:21:34.383710 1559626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-721629
	I1217 02:21:34.412862 1559626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/bridge-721629/id_rsa Username:docker}
	I1217 02:21:34.596780 1559626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:21:34.761474 1559626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:21:34.762107 1559626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 02:21:34.816747 1559626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:21:34.850158 1559626 node_ready.go:35] waiting up to 15m0s for node "bridge-721629" to be "Ready" ...
	I1217 02:21:34.852801 1559626 node_ready.go:49] node "bridge-721629" is "Ready"
	I1217 02:21:34.852828 1559626 node_ready.go:38] duration metric: took 2.633488ms for node "bridge-721629" to be "Ready" ...
	I1217 02:21:34.852849 1559626 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:21:34.852908 1559626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:21:35.239115 1559626 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
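The sed pipeline at 02:21:34.762107 splices a hosts block into CoreDNS's Corefile so that host.minikube.internal resolves to the host gateway (192.168.85.1) from inside pods. Reconstructed from the sed expressions (not dumped from the cluster), the edited Corefile section looks roughly like this:

package main

import "fmt"

// Corefile fragment after the replace: "log" is inserted before "errors", and
// the hosts block is inserted ahead of the resolv.conf forwarder so it wins.
const corefileFragment = `        log
        errors
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf`

func main() { fmt.Println(corefileFragment) }
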
	I1217 02:21:35.594275 1559626 api_server.go:72] duration metric: took 1.292672988s to wait for apiserver process to appear ...
	I1217 02:21:35.594303 1559626 api_server.go:88] waiting for apiserver healthz status ...
	I1217 02:21:35.594337 1559626 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:21:35.598322 1559626 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 02:21:35.601268 1559626 addons.go:530] duration metric: took 1.29924416s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 02:21:35.607002 1559626 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 02:21:35.608148 1559626 api_server.go:141] control plane version: v1.34.2
	I1217 02:21:35.608176 1559626 api_server.go:131] duration metric: took 13.865156ms to wait for apiserver health ...
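The healthz wait above is a plain HTTPS probe of the apiserver until it answers 200 "ok". A minimal sketch of that probe; TLS verification is skipped here for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
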
	I1217 02:21:35.608185 1559626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 02:21:35.612663 1559626 system_pods.go:59] 8 kube-system pods found
	I1217 02:21:35.612697 1559626 system_pods.go:61] "coredns-66bc5c9577-b5w2d" [4108db95-a73d-4479-b29c-b487e9349232] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.612707 1559626 system_pods.go:61] "coredns-66bc5c9577-w5297" [42e51dbd-1ef4-4340-b70d-f1beaa443697] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.612714 1559626 system_pods.go:61] "etcd-bridge-721629" [bac49376-fdbc-4aeb-b977-e4a02ab639a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:21:35.612720 1559626 system_pods.go:61] "kube-apiserver-bridge-721629" [fbf75a62-b730-4ad9-8713-a853868aa7b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:21:35.612727 1559626 system_pods.go:61] "kube-controller-manager-bridge-721629" [a26ea41e-83ec-4ece-b481-74dad73c53a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:21:35.612731 1559626 system_pods.go:61] "kube-proxy-glnrh" [2a7fd1bd-8ef7-4f67-9101-cb9701053151] Running
	I1217 02:21:35.612736 1559626 system_pods.go:61] "kube-scheduler-bridge-721629" [b4c98c56-fb88-4798-a746-c9363b5e552d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:21:35.612740 1559626 system_pods.go:61] "storage-provisioner" [650b6c42-193b-471d-95bf-71c326def376] Pending
	I1217 02:21:35.612745 1559626 system_pods.go:74] duration metric: took 4.554239ms to wait for pod list to return data ...
	I1217 02:21:35.612752 1559626 default_sa.go:34] waiting for default service account to be created ...
	I1217 02:21:35.615489 1559626 default_sa.go:45] found service account: "default"
	I1217 02:21:35.615510 1559626 default_sa.go:55] duration metric: took 2.752711ms for default service account to be created ...
	I1217 02:21:35.615519 1559626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 02:21:35.621168 1559626 system_pods.go:86] 8 kube-system pods found
	I1217 02:21:35.621207 1559626 system_pods.go:89] "coredns-66bc5c9577-b5w2d" [4108db95-a73d-4479-b29c-b487e9349232] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.621216 1559626 system_pods.go:89] "coredns-66bc5c9577-w5297" [42e51dbd-1ef4-4340-b70d-f1beaa443697] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.621224 1559626 system_pods.go:89] "etcd-bridge-721629" [bac49376-fdbc-4aeb-b977-e4a02ab639a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:21:35.621230 1559626 system_pods.go:89] "kube-apiserver-bridge-721629" [fbf75a62-b730-4ad9-8713-a853868aa7b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:21:35.621242 1559626 system_pods.go:89] "kube-controller-manager-bridge-721629" [a26ea41e-83ec-4ece-b481-74dad73c53a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:21:35.621252 1559626 system_pods.go:89] "kube-proxy-glnrh" [2a7fd1bd-8ef7-4f67-9101-cb9701053151] Running
	I1217 02:21:35.621270 1559626 system_pods.go:89] "kube-scheduler-bridge-721629" [b4c98c56-fb88-4798-a746-c9363b5e552d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:21:35.621274 1559626 system_pods.go:89] "storage-provisioner" [650b6c42-193b-471d-95bf-71c326def376] Pending
	I1217 02:21:35.621294 1559626 retry.go:31] will retry after 287.262025ms: missing components: kube-dns
	I1217 02:21:35.746257 1559626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-721629" context rescaled to 1 replicas
	I1217 02:21:35.914256 1559626 system_pods.go:86] 8 kube-system pods found
	I1217 02:21:35.914292 1559626 system_pods.go:89] "coredns-66bc5c9577-b5w2d" [4108db95-a73d-4479-b29c-b487e9349232] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.914301 1559626 system_pods.go:89] "coredns-66bc5c9577-w5297" [42e51dbd-1ef4-4340-b70d-f1beaa443697] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:35.914309 1559626 system_pods.go:89] "etcd-bridge-721629" [bac49376-fdbc-4aeb-b977-e4a02ab639a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:21:35.914316 1559626 system_pods.go:89] "kube-apiserver-bridge-721629" [fbf75a62-b730-4ad9-8713-a853868aa7b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:21:35.914324 1559626 system_pods.go:89] "kube-controller-manager-bridge-721629" [a26ea41e-83ec-4ece-b481-74dad73c53a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:21:35.914333 1559626 system_pods.go:89] "kube-proxy-glnrh" [2a7fd1bd-8ef7-4f67-9101-cb9701053151] Running
	I1217 02:21:35.914347 1559626 system_pods.go:89] "kube-scheduler-bridge-721629" [b4c98c56-fb88-4798-a746-c9363b5e552d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:21:35.914361 1559626 system_pods.go:89] "storage-provisioner" [650b6c42-193b-471d-95bf-71c326def376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 02:21:35.914377 1559626 retry.go:31] will retry after 343.706868ms: missing components: kube-dns
	I1217 02:21:36.262682 1559626 system_pods.go:86] 8 kube-system pods found
	I1217 02:21:36.262724 1559626 system_pods.go:89] "coredns-66bc5c9577-b5w2d" [4108db95-a73d-4479-b29c-b487e9349232] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:36.262777 1559626 system_pods.go:89] "coredns-66bc5c9577-w5297" [42e51dbd-1ef4-4340-b70d-f1beaa443697] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:21:36.262790 1559626 system_pods.go:89] "etcd-bridge-721629" [bac49376-fdbc-4aeb-b977-e4a02ab639a0] Running
	I1217 02:21:36.262797 1559626 system_pods.go:89] "kube-apiserver-bridge-721629" [fbf75a62-b730-4ad9-8713-a853868aa7b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:21:36.262804 1559626 system_pods.go:89] "kube-controller-manager-bridge-721629" [a26ea41e-83ec-4ece-b481-74dad73c53a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:21:36.262808 1559626 system_pods.go:89] "kube-proxy-glnrh" [2a7fd1bd-8ef7-4f67-9101-cb9701053151] Running
	I1217 02:21:36.262813 1559626 system_pods.go:89] "kube-scheduler-bridge-721629" [b4c98c56-fb88-4798-a746-c9363b5e552d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:21:36.262818 1559626 system_pods.go:89] "storage-provisioner" [650b6c42-193b-471d-95bf-71c326def376] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 02:21:36.262840 1559626 system_pods.go:126] duration metric: took 647.302084ms to wait for k8s-apps to be running ...
	I1217 02:21:36.262857 1559626 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 02:21:36.262936 1559626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:21:36.277773 1559626 system_svc.go:56] duration metric: took 14.905298ms WaitForService to wait for kubelet
	I1217 02:21:36.277799 1559626 kubeadm.go:587] duration metric: took 1.976201983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:21:36.277817 1559626 node_conditions.go:102] verifying NodePressure condition ...
	I1217 02:21:36.283200 1559626 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 02:21:36.283232 1559626 node_conditions.go:123] node cpu capacity is 2
	I1217 02:21:36.283245 1559626 node_conditions.go:105] duration metric: took 5.423894ms to run NodePressure ...
	I1217 02:21:36.283259 1559626 start.go:242] waiting for startup goroutines ...
	I1217 02:21:36.283266 1559626 start.go:247] waiting for cluster config update ...
	I1217 02:21:36.283277 1559626 start.go:256] writing updated cluster config ...
	I1217 02:21:36.283577 1559626 ssh_runner.go:195] Run: rm -f paused
	I1217 02:21:36.287820 1559626 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 02:21:36.299168 1559626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b5w2d" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:21:38.304783 1559626 pod_ready.go:104] pod "coredns-66bc5c9577-b5w2d" is not "Ready", error: <nil>
	W1217 02:21:40.305321 1559626 pod_ready.go:104] pod "coredns-66bc5c9577-b5w2d" is not "Ready", error: <nil>
	I1217 02:21:42.304642 1559626 pod_ready.go:99] pod "coredns-66bc5c9577-b5w2d" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-b5w2d" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-b5w2d" not found
	I1217 02:21:42.304669 1559626 pod_ready.go:86] duration metric: took 6.005472763s for pod "coredns-66bc5c9577-b5w2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:42.304680 1559626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w5297" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:21:44.310206 1559626 pod_ready.go:104] pod "coredns-66bc5c9577-w5297" is not "Ready", error: <nil>
	I1217 02:21:44.809866 1559626 pod_ready.go:94] pod "coredns-66bc5c9577-w5297" is "Ready"
	I1217 02:21:44.809895 1559626 pod_ready.go:86] duration metric: took 2.505208492s for pod "coredns-66bc5c9577-w5297" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:44.812726 1559626 pod_ready.go:83] waiting for pod "etcd-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:44.817529 1559626 pod_ready.go:94] pod "etcd-bridge-721629" is "Ready"
	I1217 02:21:44.817559 1559626 pod_ready.go:86] duration metric: took 4.803177ms for pod "etcd-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:44.820221 1559626 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:44.824851 1559626 pod_ready.go:94] pod "kube-apiserver-bridge-721629" is "Ready"
	I1217 02:21:44.824888 1559626 pod_ready.go:86] duration metric: took 4.632328ms for pod "kube-apiserver-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:44.827304 1559626 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:45.011849 1559626 pod_ready.go:94] pod "kube-controller-manager-bridge-721629" is "Ready"
	I1217 02:21:45.011878 1559626 pod_ready.go:86] duration metric: took 184.546504ms for pod "kube-controller-manager-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:45.210865 1559626 pod_ready.go:83] waiting for pod "kube-proxy-glnrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:45.608117 1559626 pod_ready.go:94] pod "kube-proxy-glnrh" is "Ready"
	I1217 02:21:45.608144 1559626 pod_ready.go:86] duration metric: took 397.247433ms for pod "kube-proxy-glnrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:45.808426 1559626 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:46.207967 1559626 pod_ready.go:94] pod "kube-scheduler-bridge-721629" is "Ready"
	I1217 02:21:46.207998 1559626 pod_ready.go:86] duration metric: took 399.546879ms for pod "kube-scheduler-bridge-721629" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:21:46.208011 1559626 pod_ready.go:40] duration metric: took 9.920161112s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
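pod_ready's termination condition above is "Ready or gone": a pod that disappears (like coredns-66bc5c9577-b5w2d after the deployment was rescaled to 1 replica) counts as done rather than as a failure. A sketch of that check with client-go, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone reports true when the pod has condition Ready=True or no
// longer exists, the same two exits the pod_ready loop above uses.
func podReadyOrGone(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // deleted pods count as done
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	done, err := podReadyOrGone(client, "kube-system", "coredns-66bc5c9577-w5297")
	fmt.Println(done, err)
}
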
	I1217 02:21:46.263791 1559626 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 02:21:46.266915 1559626 out.go:179] * Done! kubectl is now configured to use "bridge-721629" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348124275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348135139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348172948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348191221Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348204899Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348219340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348228637Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348243127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348261737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348290923Z" level=info msg="Connect containerd service"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.348584284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.349144971Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.367921231Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368000485Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368028342Z" level=info msg="Start subscribing containerd event"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.368075579Z" level=info msg="Start recovering state"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409358181Z" level=info msg="Start event monitor"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409558676Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409664105Z" level=info msg="Start streaming server"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409753861Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.409976198Z" level=info msg="runtime interface starting up..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410064724Z" level=info msg="starting plugins..."
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.410151470Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 02:03:12 no-preload-178365 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 02:03:12 no-preload-178365 containerd[555]: time="2025-12-17T02:03:12.416611073Z" level=info msg="containerd successfully booted in 0.090598s"
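The one error in the containerd boot above ("failed to load cni during init") is expected at this stage: /etc/cni/net.d is empty until minikube's CNI step writes 1-k8s.conflist, and containerd's "cni network conf syncer" picks the file up afterwards. A trivial check of the directory the syncer watches:

package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI conf dir:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("/etc/cni/net.d is empty - containerd reports 'cni plugin not initialized'")
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // e.g. 1-k8s.conflist once the bridge CNI is configured
	}
}
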
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:23:10.933184   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:10.935419   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:10.936054   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:10.937834   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:10.938362   10393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:57] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:23:10 up  8:05,  0 user,  load average: 0.60, 1.26, 1.34
	Linux no-preload-178365 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:23:07 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1592.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:08 no-preload-178365 kubelet[10257]: E1217 02:23:08.165989   10257 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1593.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:08 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:08 no-preload-178365 kubelet[10262]: E1217 02:23:08.916309   10262 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:08 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:09 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1594.
	Dec 17 02:23:09 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:09 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:09 no-preload-178365 kubelet[10267]: E1217 02:23:09.683488   10267 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:09 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:09 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:10 no-preload-178365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1595.
	Dec 17 02:23:10 no-preload-178365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:10 no-preload-178365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:10 no-preload-178365 kubelet[10303]: E1217 02:23:10.442372   10303 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:10 no-preload-178365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:10 no-preload-178365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
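This restart loop is the proximate cause of the no-preload failures: the v1.35.0-beta.0 kubelet is, per its own error message, configured to refuse cgroup v1 hosts, and the 5.15.0-1084-aws kernel here is booted in cgroup v1 mode (compare the earlier kubeadm warning "cgroups v1 support is in maintenance mode"), so systemd restarts kubelet 1500+ times without it ever passing config validation. A quick way to check which cgroup mode a host is in, assuming golang.org/x/sys/unix:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// On a unified-hierarchy host, /sys/fs/cgroup itself is a cgroup2 mount.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - a kubelet configured like the one above refuses to start here")
	}
}
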
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178365 -n no-preload-178365: exit status 2 (348.196687ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-178365" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (291.44s)


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.54
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.17
18 TestDownloadOnly/v1.34.2/DeleteAll 0.37
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.93
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.11
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 145.11
38 TestAddons/serial/Volcano 40.72
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 10.05
44 TestAddons/parallel/Registry 17.49
45 TestAddons/parallel/RegistryCreds 0.8
46 TestAddons/parallel/Ingress 18.65
47 TestAddons/parallel/InspektorGadget 11.73
48 TestAddons/parallel/MetricsServer 5.85
50 TestAddons/parallel/CSI 53.53
51 TestAddons/parallel/Headlamp 12.28
52 TestAddons/parallel/CloudSpanner 6.58
53 TestAddons/parallel/LocalPath 53.48
54 TestAddons/parallel/NvidiaDevicePlugin 5.61
55 TestAddons/parallel/Yakd 11.79
57 TestAddons/StoppedEnableDisable 12.41
58 TestCertOptions 48.98
59 TestCertExpiration 222.77
61 TestForceSystemdFlag 38.29
62 TestForceSystemdEnv 34.24
63 TestDockerEnvContainerd 50.48
67 TestErrorSpam/setup 31.41
68 TestErrorSpam/start 0.84
69 TestErrorSpam/status 1.16
70 TestErrorSpam/pause 1.81
71 TestErrorSpam/unpause 1.79
72 TestErrorSpam/stop 1.64
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 53.86
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.83
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
84 TestFunctional/serial/CacheCmd/cache/add_local 1.27
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 49.54
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.51
96 TestFunctional/serial/InvalidService 4.58
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 7.94
100 TestFunctional/parallel/DryRun 0.55
101 TestFunctional/parallel/InternationalLanguage 0.26
102 TestFunctional/parallel/StatusCmd 1.36
106 TestFunctional/parallel/ServiceCmdConnect 8.63
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 19.74
110 TestFunctional/parallel/SSHCmd 0.93
111 TestFunctional/parallel/CpCmd 2.18
113 TestFunctional/parallel/FileSync 0.39
114 TestFunctional/parallel/CertSync 2.24
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
122 TestFunctional/parallel/License 0.4
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 8.15
139 TestFunctional/parallel/ServiceCmd/List 0.5
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
142 TestFunctional/parallel/ServiceCmd/Format 0.46
143 TestFunctional/parallel/ServiceCmd/URL 0.5
144 TestFunctional/parallel/MountCmd/specific-port 2.17
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 0.83
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.24
153 TestFunctional/parallel/ImageCommands/Setup 0.69
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.25
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.11
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.85
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.1
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.95
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.5
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.75
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.62
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.63
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 11.36
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.05
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.25
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.49
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.22
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.11
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.09
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.48
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.17
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.17
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.05
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 183.48
265 TestMultiControlPlane/serial/DeployApp 8.48
266 TestMultiControlPlane/serial/PingHostFromPods 1.63
267 TestMultiControlPlane/serial/AddWorkerNode 59.63
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
270 TestMultiControlPlane/serial/CopyFile 19.97
271 TestMultiControlPlane/serial/StopSecondaryNode 12.99
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.1
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.42
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.67
279 TestMultiControlPlane/serial/RestartCluster 60.54
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
281 TestMultiControlPlane/serial/AddSecondaryNode 80.55
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 48.38
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.71
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.74
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.02
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 42.83
313 TestKicCustomNetwork/use_default_bridge_network 35.9
314 TestKicExistingNetwork 33.11
315 TestKicCustomSubnet 36.67
316 TestKicStaticIP 35.97
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.37
321 TestMountStart/serial/StartWithMountFirst 8.14
322 TestMountStart/serial/VerifyMountFirst 0.29
323 TestMountStart/serial/StartWithMountSecond 8.1
324 TestMountStart/serial/VerifyMountSecond 0.26
325 TestMountStart/serial/DeleteFirst 1.74
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.51
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 106.97
333 TestMultiNode/serial/DeployApp2Nodes 4.71
334 TestMultiNode/serial/PingHostFrom2Pods 1.01
335 TestMultiNode/serial/AddNode 27.58
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.68
338 TestMultiNode/serial/CopyFile 10.23
339 TestMultiNode/serial/StopNode 2.4
340 TestMultiNode/serial/StartAfterStop 7.95
341 TestMultiNode/serial/RestartKeepsNodes 78.1
342 TestMultiNode/serial/DeleteNode 5.62
343 TestMultiNode/serial/StopMultiNode 24.24
344 TestMultiNode/serial/RestartMultiNode 57.78
345 TestMultiNode/serial/ValidateNameConflict 36.8
350 TestPreload 120.71
352 TestScheduledStopUnix 109
355 TestInsufficientStorage 12.35
356 TestRunningBinaryUpgrade 60.52
359 TestMissingContainerUpgrade 138.48
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 44.03
363 TestNoKubernetes/serial/StartWithStopK8s 8.9
364 TestNoKubernetes/serial/Start 9
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
367 TestNoKubernetes/serial/ProfileList 1.69
368 TestNoKubernetes/serial/Stop 1.39
369 TestNoKubernetes/serial/StartNoArgs 6.9
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
371 TestStoppedBinaryUpgrade/Setup 0.8
372 TestStoppedBinaryUpgrade/Upgrade 304.01
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.06
382 TestPause/serial/Start 47.29
383 TestPause/serial/SecondStartNoReconfiguration 6.1
384 TestPause/serial/Pause 0.72
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.66
387 TestPause/serial/PauseAgain 0.8
388 TestPause/serial/DeletePaused 2.75
389 TestPause/serial/VerifyDeletedResources 0.4
397 TestNetworkPlugins/group/false 3.72
402 TestStartStop/group/old-k8s-version/serial/FirstStart 69.48
404 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81
405 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.29
407 TestStartStop/group/old-k8s-version/serial/Stop 12.37
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
409 TestStartStop/group/old-k8s-version/serial/SecondStart 54.72
410 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
411 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
412 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.23
413 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
414 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
415 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.72
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
418 TestStartStop/group/old-k8s-version/serial/Pause 4.65
420 TestStartStop/group/embed-certs/serial/FirstStart 82
421 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
423 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
424 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
427 TestStartStop/group/embed-certs/serial/DeployApp 9.38
428 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
429 TestStartStop/group/embed-certs/serial/Stop 12.13
430 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/embed-certs/serial/SecondStart 53.41
432 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
434 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
435 TestStartStop/group/embed-certs/serial/Pause 3.03
440 TestStartStop/group/no-preload/serial/Stop 1.34
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.3
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
453 TestNetworkPlugins/group/auto/Start 51.53
454 TestNetworkPlugins/group/auto/KubeletFlags 0.3
455 TestNetworkPlugins/group/auto/NetCatPod 9.31
456 TestNetworkPlugins/group/auto/DNS 0.17
457 TestNetworkPlugins/group/auto/Localhost 0.13
458 TestNetworkPlugins/group/auto/HairPin 0.18
459 TestNetworkPlugins/group/kindnet/Start 50.67
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
462 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
463 TestNetworkPlugins/group/kindnet/DNS 0.18
464 TestNetworkPlugins/group/kindnet/Localhost 0.15
465 TestNetworkPlugins/group/kindnet/HairPin 0.14
466 TestNetworkPlugins/group/calico/Start 69.61
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/calico/KubeletFlags 0.33
469 TestNetworkPlugins/group/calico/NetCatPod 9.33
470 TestNetworkPlugins/group/calico/DNS 0.18
471 TestNetworkPlugins/group/calico/Localhost 0.15
472 TestNetworkPlugins/group/calico/HairPin 0.15
473 TestNetworkPlugins/group/custom-flannel/Start 56.14
474 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
475 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
476 TestNetworkPlugins/group/custom-flannel/DNS 0.17
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
479 TestNetworkPlugins/group/enable-default-cni/Start 71.1
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
486 TestNetworkPlugins/group/flannel/Start 55.37
487 TestNetworkPlugins/group/flannel/ControllerPod 6.01
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
489 TestNetworkPlugins/group/flannel/NetCatPod 9.26
490 TestNetworkPlugins/group/flannel/DNS 0.18
491 TestNetworkPlugins/group/flannel/Localhost 0.2
492 TestNetworkPlugins/group/flannel/HairPin 0.15
493 TestNetworkPlugins/group/bridge/Start 48.2
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
495 TestNetworkPlugins/group/bridge/NetCatPod 10.24
496 TestNetworkPlugins/group/bridge/DNS 0.19
497 TestNetworkPlugins/group/bridge/Localhost 0.15
498 TestNetworkPlugins/group/bridge/HairPin 0.14

TestDownloadOnly/v1.28.0/json-events (5.8s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-490252 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-490252 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.797983349s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.80s)
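Editor's note: the json-events subtests above drive minikube start -o=json and consume the event stream it prints, one JSON object per stdout line. As a rough illustration only (the event schema is minikube's own and deliberately not reproduced here), a consumer can decode each line generically, as in this Go sketch:

// jsonevents.go - hedged sketch: decode newline-delimited JSON from stdin,
// e.g. piped from "minikube start -o=json". No minikube field names assumed.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise rather than aborting
		}
		fmt.Printf("event with %d fields\n", len(ev))
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}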
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:27:33.465925 1211243 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1217 00:27:33.466001 1211243 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
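Editor's note: preload-exists only asserts that the download-only start left the version-specific tarball in the local cache, at the path shown in the preload.go:203 line above. A self-contained Go sketch of that kind of existence check, with the filename pattern copied from the log and the version as a plain parameter:

// preloadcheck.go - sketch: does the preloaded-images tarball exist locally?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout seen in the log; it is illustrative,
// not a statement about minikube internals.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-containerd-overlay2-arm64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if st, err := os.Stat(p); err == nil && !st.IsDir() {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no local preload at:", p)
	}
}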
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-490252
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-490252: exit status 85 (93.588664ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-490252 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-490252 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:27:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:27:27.710953 1211248 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:27:27.711140 1211248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:27.711172 1211248 out.go:374] Setting ErrFile to fd 2...
	I1217 00:27:27.711374 1211248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:27.711754 1211248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	W1217 00:27:27.711940 1211248 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22168-1208015/.minikube/config/config.json: open /home/jenkins/minikube-integration/22168-1208015/.minikube/config/config.json: no such file or directory
	I1217 00:27:27.712378 1211248 out.go:368] Setting JSON to true
	I1217 00:27:27.713208 1211248 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22198,"bootTime":1765909050,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:27:27.713298 1211248 start.go:143] virtualization:  
	I1217 00:27:27.719023 1211248 out.go:99] [download-only-490252] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1217 00:27:27.719332 1211248 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 00:27:27.719379 1211248 notify.go:221] Checking for updates...
	I1217 00:27:27.722855 1211248 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:27:27.726475 1211248 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:27:27.729747 1211248 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:27:27.733085 1211248 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:27:27.736145 1211248 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:27:27.742282 1211248 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:27:27.742563 1211248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:27:27.767217 1211248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:27:27.767321 1211248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:27.822355 1211248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 00:27:27.813328432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:27.822467 1211248 docker.go:319] overlay module found
	I1217 00:27:27.825598 1211248 out.go:99] Using the docker driver based on user configuration
	I1217 00:27:27.825656 1211248 start.go:309] selected driver: docker
	I1217 00:27:27.825667 1211248 start.go:927] validating driver "docker" against <nil>
	I1217 00:27:27.825784 1211248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:27.881740 1211248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 00:27:27.873095308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:27.881891 1211248 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:27:27.882186 1211248 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:27:27.882339 1211248 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:27:27.885628 1211248 out.go:171] Using Docker driver with root privileges
	I1217 00:27:27.888799 1211248 cni.go:84] Creating CNI manager for ""
	I1217 00:27:27.888859 1211248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 00:27:27.888871 1211248 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:27:27.888959 1211248 start.go:353] cluster config:
	{Name:download-only-490252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-490252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:27:27.891915 1211248 out.go:99] Starting "download-only-490252" primary control-plane node in "download-only-490252" cluster
	I1217 00:27:27.891933 1211248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 00:27:27.894850 1211248 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:27:27.894904 1211248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 00:27:27.895074 1211248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:27:27.914730 1211248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:27:27.914754 1211248 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:27:27.914937 1211248 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:27:27.915054 1211248 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:27:27.951388 1211248 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:27:27.951413 1211248 cache.go:65] Caching tarball of preloaded images
	I1217 00:27:27.951587 1211248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 00:27:27.955100 1211248 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 00:27:27.955133 1211248 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1217 00:27:28.045687 1211248 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1217 00:27:28.045856 1211248 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1217 00:27:31.984038 1211248 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1217 00:27:31.984629 1211248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/download-only-490252/config.json ...
	I1217 00:27:31.984698 1211248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/download-only-490252/config.json: {Name:mkad8499733f722363230bee7362f2c3f655c447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:27:31.985000 1211248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 00:27:31.985273 1211248 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-490252 host does not exist
	  To start a cluster, run: "minikube start -p download-only-490252"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
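Editor's note: the Last Start log in the section above shows the preload fetched with a GCS-supplied digest (download.go:108, "checksum=md5:38d7f581f2fa4226c8af2c9106b982b7"). The Go sketch below shows the general stream-while-hashing pattern such a download implies; the URL and destination are placeholders, and this is not minikube's actual download code:

// checkeddownload.go - sketch: write an HTTP body to disk and verify its md5.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Tee the response through the hasher on its way to the file,
	// so the checksum is computed without a second read of the data.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder values; the digest is the one from the log above.
	err := downloadMD5("https://example.invalid/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "38d7f581f2fa4226c8af2c9106b982b7")
	fmt.Println("download:", err)
}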
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-490252
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.2/json-events (3.54s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-466930 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-466930 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.535254551s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.54s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:27:37.444063 1211243 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1217 00:27:37.444101 1211243 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.17s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-466930
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-466930: exit status 85 (165.512355ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-490252 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-490252 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ delete  │ -p download-only-490252                                                                                                                                                               │ download-only-490252 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ start   │ -o=json --download-only -p download-only-466930 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-466930 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:27:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:27:33.950440 1211447 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:27:33.950578 1211447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:33.950591 1211447 out.go:374] Setting ErrFile to fd 2...
	I1217 00:27:33.950597 1211447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:33.951437 1211447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:27:33.951913 1211447 out.go:368] Setting JSON to true
	I1217 00:27:33.952744 1211447 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22204,"bootTime":1765909050,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:27:33.952838 1211447 start.go:143] virtualization:  
	I1217 00:27:33.956378 1211447 out.go:99] [download-only-466930] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:27:33.956637 1211447 notify.go:221] Checking for updates...
	I1217 00:27:33.959645 1211447 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:27:33.962878 1211447 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:27:33.965807 1211447 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:27:33.968832 1211447 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:27:33.971781 1211447 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:27:33.977491 1211447 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:27:33.977786 1211447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:27:34.002979 1211447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:27:34.003097 1211447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:34.059529 1211447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 00:27:34.050335875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:34.059634 1211447 docker.go:319] overlay module found
	I1217 00:27:34.062765 1211447 out.go:99] Using the docker driver based on user configuration
	I1217 00:27:34.062805 1211447 start.go:309] selected driver: docker
	I1217 00:27:34.062813 1211447 start.go:927] validating driver "docker" against <nil>
	I1217 00:27:34.062912 1211447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:34.121702 1211447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 00:27:34.112895784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:34.121860 1211447 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:27:34.122127 1211447 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:27:34.122279 1211447 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:27:34.125418 1211447 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-466930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-466930"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.17s)
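Editor's note: the LogsDuration assertions above deliberately tolerate a non-zero exit ("minikube logs failed with error: exit status 85" is recorded, yet the test passes, since a download-only profile has no started cluster to show logs for). The helpers do this by branching on the command's exit code rather than treating any failure as fatal; a minimal Go sketch of recovering an exit code with os/exec, where "false" stands in for the real binary:

// exitstatus.go - sketch: run a command and inspect its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("false") // stand-in for out/minikube-linux-arm64 logs -p ...
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit status 0")
	case errors.As(err, &exitErr):
		// Non-zero exit: the code decides whether the result is acceptable,
		// mirroring the "(dbg) Non-zero exit ... exit status 85" handling.
		fmt.Printf("exit status %d, output %q\n", exitErr.ExitCode(), out)
	default:
		fmt.Println("failed to start command:", err) // e.g. binary not found
	}
}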
TestDownloadOnly/v1.34.2/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.37s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-466930
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.93s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-210036 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-210036 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.931774528s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.93s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:27:42.133352 1211243 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1217 00:27:42.133394 1211243 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-210036
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-210036: exit status 85 (106.322962ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-490252 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-490252 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ delete  │ -p download-only-490252                                                                                                                                                                      │ download-only-490252 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ start   │ -o=json --download-only -p download-only-466930 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-466930 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ delete  │ -p download-only-466930                                                                                                                                                                      │ download-only-466930 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │ 17 Dec 25 00:27 UTC │
	│ start   │ -o=json --download-only -p download-only-210036 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-210036 │ jenkins │ v1.37.0 │ 17 Dec 25 00:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:27:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:27:38.255876 1211647 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:27:38.256161 1211647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:38.256205 1211647 out.go:374] Setting ErrFile to fd 2...
	I1217 00:27:38.256224 1211647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:27:38.256553 1211647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:27:38.257131 1211647 out.go:368] Setting JSON to true
	I1217 00:27:38.258192 1211647 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22209,"bootTime":1765909050,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:27:38.258308 1211647 start.go:143] virtualization:  
	I1217 00:27:38.302528 1211647 out.go:99] [download-only-210036] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:27:38.302817 1211647 notify.go:221] Checking for updates...
	I1217 00:27:38.335440 1211647 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:27:38.365455 1211647 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:27:38.398671 1211647 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:27:38.430451 1211647 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:27:38.462845 1211647 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:27:38.526320 1211647 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:27:38.526629 1211647 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:27:38.548155 1211647 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:27:38.548270 1211647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:38.604970 1211647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:27:38.594651915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:38.605071 1211647 docker.go:319] overlay module found
	I1217 00:27:38.648585 1211647 out.go:99] Using the docker driver based on user configuration
	I1217 00:27:38.648635 1211647 start.go:309] selected driver: docker
	I1217 00:27:38.648659 1211647 start.go:927] validating driver "docker" against <nil>
	I1217 00:27:38.648771 1211647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:27:38.705675 1211647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:27:38.696413422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:27:38.705844 1211647 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:27:38.706128 1211647 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:27:38.706269 1211647 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:27:38.744428 1211647 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-210036 host does not exist
	  To start a cluster, run: "minikube start -p download-only-210036"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.11s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-210036
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1217 00:27:43.458140 1211243 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-581036 --alsologtostderr --binary-mirror http://127.0.0.1:46569 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-581036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-581036
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-799486
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-799486: exit status 85 (80.620608ms)

-- stdout --
	* Profile "addons-799486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-799486"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-799486
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-799486: exit status 85 (87.092033ms)

-- stdout --
	* Profile "addons-799486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-799486"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (145.11s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-799486 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-799486 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m25.107237122s)
--- PASS: TestAddons/Setup (145.11s)

TestAddons/serial/Volcano (40.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 61.652361ms
addons_test.go:886: volcano-controller stabilized in 62.056608ms
addons_test.go:878: volcano-admission stabilized in 62.595477ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-nkfbd" [1d4e527a-d687-4cd7-8ff7-e0d4af64f70a] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004308627s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-l5f6b" [ba944e76-93fd-4c4c-b0ab-4429f5f91f04] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004232659s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-6hng9" [ab28ae7f-ff33-4de5-a893-bcffd36a84d9] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003633734s
addons_test.go:905: (dbg) Run:  kubectl --context addons-799486 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-799486 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-799486 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [641146c0-6059-4ee8-81ea-bd81697bc4f2] Pending
helpers_test.go:353: "test-job-nginx-0" [641146c0-6059-4ee8-81ea-bd81697bc4f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [641146c0-6059-4ee8-81ea-bd81697bc4f2] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003605448s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable volcano --alsologtostderr -v=1: (11.997747315s)
--- PASS: TestAddons/serial/Volcano (40.72s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-799486 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-799486 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.05s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-799486 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-799486 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7e244e72-0dea-42fb-8030-76aca8c7f068] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7e244e72-0dea-42fb-8030-76aca8c7f068] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004780046s
addons_test.go:696: (dbg) Run:  kubectl --context addons-799486 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-799486 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-799486 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-799486 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.05s)

TestAddons/parallel/Registry (17.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.776736ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-w9d9k" [04597169-ad83-41e9-9641-fcd21708161b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00463249s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ptjkh" [cda718d3-7a73-442b-9633-27f5a14125f5] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006014739s
addons_test.go:394: (dbg) Run:  kubectl --context addons-799486 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-799486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-799486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.50506998s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 ip
2025/12/17 00:31:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.49s)

TestAddons/parallel/RegistryCreds (0.8s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.418713ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-799486
addons_test.go:334: (dbg) Run:  kubectl --context addons-799486 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.80s)

TestAddons/parallel/Ingress (18.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-799486 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-799486 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-799486 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [01b329e2-acbc-4048-919c-4301261016c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [01b329e2-acbc-4048-919c-4301261016c2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00316427s
I1217 00:32:46.524439 1211243 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-799486 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable ingress-dns --alsologtostderr -v=1: (1.122924682s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable ingress --alsologtostderr -v=1: (7.793261444s)
--- PASS: TestAddons/parallel/Ingress (18.65s)

TestAddons/parallel/InspektorGadget (11.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-dj89f" [acf358c2-bac7-47d1-be3f-051b077789f5] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002926321s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable inspektor-gadget --alsologtostderr -v=1: (5.724563559s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.510283ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-cmzq4" [434f17e3-2d49-466e-af39-f1386df8b080] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004056439s
addons_test.go:465: (dbg) Run:  kubectl --context addons-799486 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/CSI (53.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:31:21.346681 1211243 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 00:31:21.350925 1211243 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:31:21.350956 1211243 kapi.go:107] duration metric: took 7.371111ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.383353ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-799486 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-799486 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [763d8cf9-93b4-4b67-8329-246b884763d9] Pending
helpers_test.go:353: "task-pv-pod" [763d8cf9-93b4-4b67-8329-246b884763d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [763d8cf9-93b4-4b67-8329-246b884763d9] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003131282s
addons_test.go:574: (dbg) Run:  kubectl --context addons-799486 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-799486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-799486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-799486 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-799486 delete pod task-pv-pod: (1.296488532s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-799486 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-799486 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-799486 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [3fa8fa8f-ec0e-4836-838a-2e4add665170] Pending
helpers_test.go:353: "task-pv-pod-restore" [3fa8fa8f-ec0e-4836-838a-2e4add665170] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [3fa8fa8f-ec0e-4836-838a-2e4add665170] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003324006s
addons_test.go:616: (dbg) Run:  kubectl --context addons-799486 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-799486 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-799486 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.896821997s)
--- PASS: TestAddons/parallel/CSI (53.53s)

TestAddons/parallel/Headlamp (12.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-799486 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-4k9wb" [ca40cf83-1395-447f-ab57-42c9102b3b6a] Pending
helpers_test.go:353: "headlamp-dfcdc64b-4k9wb" [ca40cf83-1395-447f-ab57-42c9102b3b6a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-4k9wb" [ca40cf83-1395-447f-ab57-42c9102b3b6a] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.002997355s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.28s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-8z4q4" [3e657c93-17bb-4122-8ed9-95606be4d673] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003607628s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (53.48s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-799486 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-799486 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [36114693-8eec-4d66-a407-e83a51562d9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [36114693-8eec-4d66-a407-e83a51562d9a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [36114693-8eec-4d66-a407-e83a51562d9a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.012892339s
addons_test.go:969: (dbg) Run:  kubectl --context addons-799486 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 ssh "cat /opt/local-path-provisioner/pvc-30b0e95b-6d3c-4a6f-b957-8528969914df_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-799486 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-799486 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.073833783s)
--- PASS: TestAddons/parallel/LocalPath (53.48s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-b57zh" [98beb070-6791-4e27-be4a-b6bdbce915fa] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003887198s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-bc5q9" [605bef46-4ff5-413e-af7c-20fa66f1218f] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002861221s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-799486 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-799486 addons disable yakd --alsologtostderr -v=1: (5.79030805s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-799486
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-799486: (12.119299453s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-799486
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-799486
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-799486
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (48.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-097858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-097858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (45.457355928s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-097858 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-097858 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-097858 -- "sudo cat /etc/kubernetes/admin.conf"
E1217 01:50:09.436616 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "cert-options-097858" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-097858
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-097858: (2.681580379s)
--- PASS: TestCertOptions (48.98s)

TestCertExpiration (222.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-741064 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-741064 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (31.622714498s)
E1217 01:46:56.881935 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:48:19.949930 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-741064 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-741064 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.279050542s)
helpers_test.go:176: Cleaning up "cert-expiration-741064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-741064
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-741064: (2.870753867s)
--- PASS: TestCertExpiration (222.77s)

TestForceSystemdFlag (38.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-914595 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1217 01:44:50.422569 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-914595 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.969124231s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-914595 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-914595" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-914595
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-914595: (2.022559367s)
--- PASS: TestForceSystemdFlag (38.29s)

TestForceSystemdEnv (34.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-113128 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1217 01:45:09.433510 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-113128 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.845729196s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-113128 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-113128" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-113128
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-113128: (2.054510745s)
--- PASS: TestForceSystemdEnv (34.24s)

TestDockerEnvContainerd (50.48s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-160928 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-160928 --driver=docker  --container-runtime=containerd: (34.202385102s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-160928"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-160928": (1.298405687s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AP0UYVns18vX/agent.1230976" SSH_AGENT_PID="1230977" DOCKER_HOST=ssh://docker@127.0.0.1:33928 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AP0UYVns18vX/agent.1230976" SSH_AGENT_PID="1230977" DOCKER_HOST=ssh://docker@127.0.0.1:33928 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AP0UYVns18vX/agent.1230976" SSH_AGENT_PID="1230977" DOCKER_HOST=ssh://docker@127.0.0.1:33928 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.921241787s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AP0UYVns18vX/agent.1230976" SSH_AGENT_PID="1230977" DOCKER_HOST=ssh://docker@127.0.0.1:33928 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-160928" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-160928
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-160928: (2.392364658s)
--- PASS: TestDockerEnvContainerd (50.48s)

TestErrorSpam/setup (31.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-262216 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-262216 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-262216 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-262216 --driver=docker  --container-runtime=containerd: (31.410521588s)
--- PASS: TestErrorSpam/setup (31.41s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (1.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 stop: (1.435939401s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-262216 --log_dir /tmp/nospam-262216 stop
--- PASS: TestErrorSpam/stop (1.64s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (53.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1217 00:35:09.438651 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.445205 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.456601 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.477990 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.519391 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.600815 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:09.762295 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:10.083956 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:10.726015 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:12.012785 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:14.574157 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:19.696163 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:29.937476 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-416001 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (53.860927904s)
--- PASS: TestFunctional/serial/StartWithProxy (53.86s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.83s)
=== RUN   TestFunctional/serial/SoftStart
I1217 00:35:43.209719 1211243 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --alsologtostderr -v=8
E1217 00:35:50.419694 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-416001 --alsologtostderr -v=8: (7.819354051s)
functional_test.go:678: soft start took 7.827774004s for "functional-416001" cluster.
I1217 00:35:51.029394 1211243 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.83s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-416001 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:3.1: (1.290128437s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:3.3: (1.174857289s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 cache add registry.k8s.io/pause:latest: (1.084042036s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-416001 /tmp/TestFunctionalserialCacheCmdcacheadd_local208980683/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache add minikube-local-cache-test:functional-416001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache delete minikube-local-cache-test:functional-416001
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-416001
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.47724ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
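The cache_reload flow above deletes an image from the node's containerd store, confirms crictl can no longer find it, then restores it from minikube's local cache. A by-hand sketch of the same sequence, assuming the functional-416001 profile and an image previously added with cache add:

	out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	out/minikube-linux-arm64 -p functional-416001 cache reload
	out/minikube-linux-arm64 -p functional-416001 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again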
TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 kubectl -- --context functional-416001 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-416001 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (49.54s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:36:31.381772 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-416001 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.537780639s)
functional_test.go:776: restart took 49.537881694s for "functional-416001" cluster.
I1217 00:36:48.266391 1211243 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (49.54s)

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-416001 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
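The ComponentHealth check above pulls the full control-plane pod JSON and reads each component's phase and Ready status. A jsonpath sketch that surfaces the same fields, assuming the functional-416001 context:

	kubectl --context functional-416001 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'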
TestFunctional/serial/LogsCmd (1.47s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 logs: (1.471310221s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 logs --file /tmp/TestFunctionalserialLogsFileCmd666520443/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 logs --file /tmp/TestFunctionalserialLogsFileCmd666520443/001/logs.txt: (1.513067006s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.58s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-416001 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-416001
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-416001: exit status 115 (388.696366ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31835 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-416001 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)
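testdata/invalidsvc.yaml is not reproduced in this log; any Service whose selector matches no running pod triggers the SVC_UNREACHABLE exit above. A hypothetical stand-in (names assumed, not the actual testdata):

	kubectl --context functional-416001 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # hypothetical label; matches no pod, so the service has no endpoints
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-arm64 service invalid-svc -p functional-416001   # exit status 115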
TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 config get cpus: exit status 14 (69.753336ms)
** stderr **
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 config get cpus: exit status 14 (82.1185ms)
** stderr **
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (7.94s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-416001 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-416001 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1246799: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.94s)

TestFunctional/parallel/DryRun (0.55s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.989841ms)
-- stdout --
	* [functional-416001] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1217 00:37:27.748047 1245949 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:27.748574 1245949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:27.748614 1245949 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:27.748637 1245949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:27.749362 1245949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:37:27.749955 1245949 out.go:368] Setting JSON to false
	I1217 00:37:27.750933 1245949 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22798,"bootTime":1765909050,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:37:27.751089 1245949 start.go:143] virtualization:  
	I1217 00:37:27.756550 1245949 out.go:179] * [functional-416001] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:37:27.759483 1245949 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:37:27.759535 1245949 notify.go:221] Checking for updates...
	I1217 00:37:27.765113 1245949 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:27.768067 1245949 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:37:27.770877 1245949 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:37:27.773686 1245949 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:37:27.776534 1245949 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:37:27.779771 1245949 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 00:37:27.780336 1245949 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:27.806068 1245949 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:37:27.806196 1245949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:27.892237 1245949 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 00:37:27.881046023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:37:27.892344 1245949 docker.go:319] overlay module found
	I1217 00:37:27.895629 1245949 out.go:179] * Using the docker driver based on existing profile
	I1217 00:37:27.898543 1245949 start.go:309] selected driver: docker
	I1217 00:37:27.898567 1245949 start.go:927] validating driver "docker" against &{Name:functional-416001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-416001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:27.898663 1245949 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:37:27.902075 1245949 out.go:203] 
	W1217 00:37:27.904920 1245949 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:37:27.907843 1245949 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)
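The exit status 23 above is the point of the test: minikube validates --memory against its usable minimum (1800MB on this host) during --dry-run, before touching the existing cluster. A sketch of the boundary, assuming the same profile:

	out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 250MB --driver=docker --container-runtime=containerd    # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 2048MB --driver=docker --container-runtime=containerd   # passes the memory validation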
TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-416001 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (256.777541ms)
-- stdout --
	* [functional-416001] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1217 00:37:27.510235 1245865 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:27.510473 1245865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:27.510486 1245865 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:27.510493 1245865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:27.510894 1245865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 00:37:27.511292 1245865 out.go:368] Setting JSON to false
	I1217 00:37:27.514449 1245865 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22798,"bootTime":1765909050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 00:37:27.514523 1245865 start.go:143] virtualization:  
	I1217 00:37:27.518079 1245865 out.go:179] * [functional-416001] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 00:37:27.521142 1245865 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:37:27.521440 1245865 notify.go:221] Checking for updates...
	I1217 00:37:27.527521 1245865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:27.530620 1245865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 00:37:27.533587 1245865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 00:37:27.537128 1245865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:37:27.539998 1245865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:37:27.543408 1245865 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 00:37:27.543991 1245865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:27.595038 1245865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:37:27.595167 1245865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:27.676140 1245865 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 00:37:27.664052618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:37:27.676251 1245865 docker.go:319] overlay module found
	I1217 00:37:27.679538 1245865 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 00:37:27.681741 1245865 start.go:309] selected driver: docker
	I1217 00:37:27.681753 1245865 start.go:927] validating driver "docker" against &{Name:functional-416001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-416001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:27.681868 1245865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:37:27.684708 1245865 out.go:203] 
	W1217 00:37:27.687723 1245865 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:37:27.693754 1245865 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.36s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-416001 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-416001 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-d7c4j" [b8a9a33a-4ec1-40f9-a5c7-ef1b0f2ffda3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-d7c4j" [b8a9a33a-4ec1-40f9-a5c7-ef1b0f2ffda3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003135413s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31841
functional_test.go:1680: http://192.168.49.2:31841: success! body:
Request served by hello-node-connect-7d85dfc575-d7c4j
HTTP/1.1 GET /
Host: 192.168.49.2:31841
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
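The connect test above is the standard deploy/expose/curl round trip; a condensed sketch using the same commands shown in the log:

	kubectl --context functional-416001 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-416001 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-416001 service hello-node-connect --url)
	curl -s "$URL"   # echo-server replies with the request it received, as captured above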
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (19.74s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [6195319f-0d88-4283-8de7-3a6be6e52413] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003547734s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-416001 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-416001 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-416001 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-416001 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9eb107b4-855b-4ee9-a790-0599809a52bd] Pending
helpers_test.go:353: "sp-pod" [9eb107b4-855b-4ee9-a790-0599809a52bd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003245828s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-416001 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-416001 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-416001 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ebef25ee-3123-4583-a7e1-75f3dbf617d2] Pending
helpers_test.go:353: "sp-pod" [ebef25ee-3123-4583-a7e1-75f3dbf617d2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003642804s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-416001 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.74s)
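The PVC test writes a file through one pod, deletes the pod, and verifies a replacement pod sees the file on the same claim. testdata/storage-provisioner/{pvc,pod}.yaml are not shown in this log; a hypothetical equivalent (container name, image, and size are assumed):

	kubectl --context functional-416001 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: frontend        # hypothetical name
	    image: nginx          # hypothetical image
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: data
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF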
TestFunctional/parallel/SSHCmd (0.93s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.93s)

TestFunctional/parallel/CpCmd (2.18s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh -n functional-416001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cp functional-416001:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170430960/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh -n functional-416001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh -n functional-416001 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.18s)

TestFunctional/parallel/FileSync (0.39s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1211243/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /etc/test/nested/copy/1211243/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)
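FileSync depends on minikube copying everything under $MINIKUBE_HOME/files into the node's filesystem at the same path on start; a sketch of how the file checked above gets staged, assuming that layout:

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/1211243"
	echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/1211243/hosts"
	out/minikube-linux-arm64 start -p functional-416001   # /etc/test/nested/copy/1211243/hosts now exists in the node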
TestFunctional/parallel/CertSync (2.24s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1211243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /etc/ssl/certs/1211243.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1211243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /usr/share/ca-certificates/1211243.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12112432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /etc/ssl/certs/12112432.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12112432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /usr/share/ca-certificates/12112432.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
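The hash-named paths checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow the OpenSSL subject-hash convention: each synced cert is installed both under its original name and under its subject hash with a .0 suffix. A sketch of deriving the expected name, assuming 1211243.pem is the cert verified above:

	openssl x509 -noout -subject_hash -in 1211243.pem   # prints the hash, e.g. 51391683
	# so the same cert is also expected at /etc/ssl/certs/51391683.0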
TestFunctional/parallel/NodeLabels (0.13s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-416001 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active docker": exit status 1 (425.25597ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active crio": exit status 1 (337.313378ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
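Both non-zero exits above are the expected result: with containerd as the active runtime, docker and crio must be inactive, and `systemctl is-active` prints the state while exiting non-zero for an inactive unit (the remote status 3 surfaced in the ssh messages above). A sketch of the contrast, assuming the same profile:

	out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active containerd"   # active, remote exit 0
	out/minikube-linux-arm64 -p functional-416001 ssh "sudo systemctl is-active docker"       # inactive, remote exit 3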
TestFunctional/parallel/License (0.4s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1242905: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-416001 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [159b0456-257b-4e0f-8448-898113bdd330] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [159b0456-257b-4e0f-8448-898113bdd330] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003304733s
I1217 00:37:06.387928 1211243 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)
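
Note: the wait loop above polls pods matching the `run=nginx-svc` label until they report Running, with a 4m0s budget. A sketch of the same label-selector wait, assuming kubectl and the functional-416001 context from this run; this is not the suite's helpers_test.go implementation:

// wait_pods.go: illustrative polling sketch under the assumptions above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the test
	for time.Now().Before(deadline) {
		// jsonpath prints each matching pod's phase, space-separated.
		out, _ := exec.Command("kubectl", "--context", "functional-416001",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}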

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-416001 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.82.160 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
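
Note: AccessDirect simply issues an HTTP request against the LoadBalancer ingress IP that the tunnel makes routable (10.106.82.160 in this run; the address varies per run). A sketch of that probe:

// tunnel_probe.go: illustrative sketch; the ingress IP is this run's value.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.106.82.160")
	if err != nil {
		fmt.Println("tunnel not working:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel is working, status:", resp.Status)
}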

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-416001 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-416001 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-416001 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-j9b76" [376fd232-1594-4cf7-8494-196e8809d599] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-j9b76" [376fd232-1594-4cf7-8494-196e8809d599] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00304638s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.732598ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.782855ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "364.147947ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.569712ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
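
Note: the "Took ..." lines above come from timing each invocation; `--light` returns in ~50ms versus ~360ms, plausibly because it skips probing each profile's cluster status. A sketch of how such timings can be produced (the flags are the ones shown in the log; the wrapper itself is illustrative):

// profile_timing.go: illustrative timing sketch, not the test's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func timed(args ...string) {
	start := time.Now()
	if err := exec.Command("minikube", args...).Run(); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Printf("Took %v to run minikube %v\n", time.Since(start), args)
}

func main() {
	timed("profile", "list", "-o", "json")
	timed("profile", "list", "-o", "json", "--light")
}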

TestFunctional/parallel/MountCmd/any-port (8.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdany-port1962444683/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765931837911715145" to /tmp/TestFunctionalparallelMountCmdany-port1962444683/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765931837911715145" to /tmp/TestFunctionalparallelMountCmdany-port1962444683/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765931837911715145" to /tmp/TestFunctionalparallelMountCmdany-port1962444683/001/test-1765931837911715145
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (385.624526ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 00:37:18.299026 1211243 retry.go:31] will retry after 413.825417ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:37 test-1765931837911715145
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh cat /mount-9p/test-1765931837911715145
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-416001 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d55e2a71-4793-40a5-9192-54c81f367974] Pending
helpers_test.go:353: "busybox-mount" [d55e2a71-4793-40a5-9192-54c81f367974] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [d55e2a71-4793-40a5-9192-54c81f367974] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [d55e2a71-4793-40a5-9192-54c81f367974] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004972431s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-416001 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdany-port1962444683/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.15s)
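
Note: the first `findmnt` probe above fails while the 9p mount daemon is still starting, and retry.go backs off (~414ms here) before probing again. A sketch of that retry loop, assuming the functional-416001 profile and the /mount-9p target from this run:

// mount_probe.go: illustrative retry sketch under the assumptions above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// Succeeds only once /mount-9p is backed by a 9p filesystem.
		err := exec.Command("minikube", "-p", "functional-416001",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is a 9p mount")
			return
		}
		fmt.Printf("attempt %d: %v, retrying\n", attempt, err)
		time.Sleep(400 * time.Millisecond)
	}
}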

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service list -o json
functional_test.go:1504: Took "512.657407ms" to run "out/minikube-linux-arm64 -p functional-416001 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31326
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31326
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
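
Note: the URL and HTTPS subtests resolve the hello-node NodePort endpoint (31326 in this run; the port is assigned per deployment) via `service --url`. A sketch that fetches the URL and then hits it, under those same assumptions:

// service_url.go: illustrative sketch combining the URL lookup and a probe.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-416001",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service lookup failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("found endpoint:", url)
	if resp, err := http.Get(url); err == nil {
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}
}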

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdspecific-port2625302920/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (503.36793ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 00:37:26.562065 1211243 retry.go:31] will retry after 322.672595ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdspecific-port2625302920/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh "sudo umount -f /mount-9p": exit status 1 (361.000948ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-416001 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdspecific-port2625302920/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-416001 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-416001 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2945792366/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)
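
Note: this subtest starts three mount daemons and then clears them all with a single `mount --kill=true`, which is why each subsequent stop finds no surviving parent process. A one-shot sketch of that cleanup call, assuming the same profile:

// mount_cleanup.go: illustrative sketch; --kill=true terminates every mount
// process for the profile rather than unmounting one mountpoint.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "mount", "-p", "functional-416001",
		"--kill=true").Run()
	fmt.Println("kill all mounts:", err)
}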

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-416001 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-416001
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-416001
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-416001 image ls --format short --alsologtostderr:
I1217 00:37:39.562184 1248515 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:39.562471 1248515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:39.562499 1248515 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:39.562518 1248515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:39.562863 1248515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:39.563885 1248515 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:39.564080 1248515 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:39.564778 1248515 cli_runner.go:164] Run: docker container inspect functional-416001 --format={{.State.Status}}
I1217 00:37:39.582922 1248515 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:39.583001 1248515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-416001
I1217 00:37:39.611350 1248515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-416001/id_rsa Username:docker}
I1217 00:37:39.708322 1248515 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-416001 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/kicbase/echo-server               │ functional-416001  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-416001  │ sha256:d3dbcc │ 990B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-416001 image ls --format table --alsologtostderr:
I1217 00:37:41.091648 1248953 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:41.091814 1248953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:41.091844 1248953 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:41.091864 1248953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:41.092126 1248953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:41.092862 1248953 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:41.093032 1248953 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:41.093596 1248953 cli_runner.go:164] Run: docker container inspect functional-416001 --format={{.State.Status}}
I1217 00:37:41.110474 1248953 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:41.110538 1248953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-416001
I1217 00:37:41.127837 1248953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-416001/id_rsa Username:docker}
I1217 00:37:41.224138 1248953 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-416001 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[
],"repoTags":["docker.io/kicbase/echo-server:functional-416001"],"size":"2173567"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"
],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-416001"],"size":"990"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io
/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d33
80c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-416001 image ls --format json --alsologtostderr:
I1217 00:37:40.839136 1248873 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:40.839285 1248873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:40.839292 1248873 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:40.839297 1248873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:40.839561 1248873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:40.840210 1248873 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:40.840332 1248873 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:40.840857 1248873 cli_runner.go:164] Run: docker container inspect functional-416001 --format={{.State.Status}}
I1217 00:37:40.861792 1248873 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:40.861855 1248873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-416001
I1217 00:37:40.885446 1248873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-416001/id_rsa Username:docker}
I1217 00:37:40.994777 1248873 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
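
Note: the JSON stdout above is an array of objects with id, repoDigests, repoTags, and size fields (field names taken from this log, not from a documented schema). A sketch that decodes it, under that assumption and with the same profile:

// image_list.go: illustrative decoder for the `image ls --format json` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-416001",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		// Print the short id ("sha256:" plus 12 hex chars), tags, and size.
		fmt.Printf("%s tags=%v size=%s\n", img.ID[:19], img.RepoTags, img.Size)
	}
}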

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-416001 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-416001
size: "2173567"
- id: sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-416001
size: "990"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-416001 image ls --format yaml --alsologtostderr:
I1217 00:37:39.836756 1248591 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:39.836972 1248591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:39.837003 1248591 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:39.837022 1248591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:39.837382 1248591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:39.838156 1248591 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:39.838357 1248591 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:39.838950 1248591 cli_runner.go:164] Run: docker container inspect functional-416001 --format={{.State.Status}}
I1217 00:37:39.865119 1248591 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:39.865189 1248591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-416001
I1217 00:37:39.894816 1248591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-416001/id_rsa Username:docker}
I1217 00:37:40.001119 1248591 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-416001 ssh pgrep buildkitd: exit status 1 (416.671136ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr: (3.592593696s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr:
I1217 00:37:40.592644 1248797 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:40.598261 1248797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:40.598280 1248797 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:40.598287 1248797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:40.598599 1248797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:40.599309 1248797 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:40.601151 1248797 config.go:182] Loaded profile config "functional-416001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1217 00:37:40.601822 1248797 cli_runner.go:164] Run: docker container inspect functional-416001 --format={{.State.Status}}
I1217 00:37:40.640782 1248797 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:40.640835 1248797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-416001
I1217 00:37:40.663031 1248797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-416001/id_rsa Username:docker}
I1217 00:37:40.761970 1248797 build_images.go:162] Building image from path: /tmp/build.985410483.tar
I1217 00:37:40.762100 1248797 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:37:40.771912 1248797 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.985410483.tar
I1217 00:37:40.776870 1248797 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.985410483.tar: stat -c "%s %y" /var/lib/minikube/build/build.985410483.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.985410483.tar': No such file or directory
I1217 00:37:40.776941 1248797 ssh_runner.go:362] scp /tmp/build.985410483.tar --> /var/lib/minikube/build/build.985410483.tar (3072 bytes)
I1217 00:37:40.812401 1248797 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.985410483
I1217 00:37:40.823832 1248797 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.985410483 -xf /var/lib/minikube/build/build.985410483.tar
I1217 00:37:40.834815 1248797 containerd.go:394] Building image: /var/lib/minikube/build/build.985410483
I1217 00:37:40.834909 1248797 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.985410483 --local dockerfile=/var/lib/minikube/build/build.985410483 --output type=image,name=localhost/my-image:functional-416001
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.7s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:00705a5aee5d36c70dad2d4b9c6c675c913a5bf337415d0b962bbbe3d943b875 0.0s done
#8 exporting config sha256:e51ff3d44c1474c86d6e1619c0bceb294805be675c7e72a4dce741ee58c50b41 0.0s done
#8 naming to localhost/my-image:functional-416001 done
#8 DONE 0.2s
I1217 00:37:44.042633 1248797 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.985410483 --local dockerfile=/var/lib/minikube/build/build.985410483 --output type=image,name=localhost/my-image:functional-416001: (3.207692074s)
I1217 00:37:44.042711 1248797 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.985410483
I1217 00:37:44.053380 1248797 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.985410483.tar
I1217 00:37:44.063214 1248797 build_images.go:218] Built localhost/my-image:functional-416001 from /tmp/build.985410483.tar
I1217 00:37:44.063247 1248797 build_images.go:134] succeeded building to: functional-416001
I1217 00:37:44.063252 1248797 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)
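
Note: as the stderr above shows, `image build` on a containerd cluster tars the local context, copies it to /var/lib/minikube/build on the node, and runs buildctl against the node's buildkit; there is no buildkitd on the host, hence the failed pgrep at the start. A sketch of the client-side invocation only (the tar/scp/buildctl steps are minikube's internals; tag and context dir are this test's values):

// image_build.go: illustrative driver for the build flow shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-416001",
		"image", "build", "-t", "localhost/my-image:functional-416001",
		"testdata/build")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}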

TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-416001
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image load --daemon kicbase/echo-server:functional-416001 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image load --daemon kicbase/echo-server:functional-416001 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-416001
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image load --daemon kicbase/echo-server:functional-416001 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-416001 image load --daemon kicbase/echo-server:functional-416001 --alsologtostderr: (1.113184501s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image save kicbase/echo-server:functional-416001 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
2025/12/17 00:37:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image rm kicbase/echo-server:functional-416001 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-416001
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 image save --daemon kicbase/echo-server:functional-416001 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-416001
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
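
Note: taken together, the last four subtests exercise a save/remove/load/save-daemon round trip for the echo-server image. A sketch of that cycle using the same commands (the tar path here is an example; the run above used the Jenkins workspace path):

// image_roundtrip.go: illustrative round-trip of the image save/load commands.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v -> err=%v\n%s", name, args, err, out)
}

func main() {
	const img = "kicbase/echo-server:functional-416001"
	run("minikube", "-p", "functional-416001", "image", "save", img, "/tmp/echo-server-save.tar")
	run("minikube", "-p", "functional-416001", "image", "rm", img)
	run("minikube", "-p", "functional-416001", "image", "load", "/tmp/echo-server-save.tar")
	run("minikube", "-p", "functional-416001", "image", "save", "--daemon", img)
	run("docker", "image", "inspect", img) // image is back in the host daemon
}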

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-416001 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-416001
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-416001
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-416001
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:3.1: (1.082543562s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:3.3: (1.110760658s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-608344 cache add registry.k8s.io/pause:latest: (1.051835228s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.25s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2926282552/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache add minikube-local-cache-test:functional-608344
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache delete minikube-local-cache-test:functional-608344
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.905022ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.85s)
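The cache_reload sequence above can be reproduced by hand: remove the image inside the node, confirm crictl no longer finds it (non-zero exit), then repopulate the node from minikube's on-host cache (a sketch):

out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit status 1: image gone
out/minikube-linux-arm64 -p functional-608344 cache reload                                           # push cached images back into the node
out/minikube-linux-arm64 -p functional-608344 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again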
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.10s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1870960618/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)
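LogsCmd and LogsFileCmd differ only in the sink: --file writes the same log bundle to a path instead of stdout (a sketch; the output path is illustrative):

out/minikube-linux-arm64 -p functional-608344 logs                       # dump to stdout
out/minikube-linux-arm64 -p functional-608344 logs --file /tmp/logs.txt  # write to a file instead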
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 config get cpus: exit status 14 (89.714618ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 config get cpus: exit status 14 (77.58748ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.50s)
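The exit status 14 seen twice above is the expected result: config get fails with 14 while the key is unset and succeeds only between set and unset (a sketch):

out/minikube-linux-arm64 -p functional-608344 config set cpus 2
out/minikube-linux-arm64 -p functional-608344 config get cpus    # prints 2
out/minikube-linux-arm64 -p functional-608344 config unset cpus
out/minikube-linux-arm64 -p functional-608344 config get cpus    # exit status 14: key not found in config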
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.486175ms)
-- stdout --
	* [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1217 01:06:52.492064 1277994 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:06:52.492256 1277994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.492289 1277994 out.go:374] Setting ErrFile to fd 2...
	I1217 01:06:52.492309 1277994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.492597 1277994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:06:52.493005 1277994 out.go:368] Setting JSON to false
	I1217 01:06:52.493894 1277994 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24563,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:06:52.493996 1277994 start.go:143] virtualization:  
	I1217 01:06:52.497294 1277994 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:06:52.500342 1277994 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:06:52.500426 1277994 notify.go:221] Checking for updates...
	I1217 01:06:52.506077 1277994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:06:52.508979 1277994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:06:52.511909 1277994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:06:52.514757 1277994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:06:52.517577 1277994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:06:52.520850 1277994 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:06:52.521489 1277994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:06:52.554804 1277994 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:06:52.554924 1277994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.609501 1277994 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.600509357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.609605 1277994 docker.go:319] overlay module found
	I1217 01:06:52.612726 1277994 out.go:179] * Using the docker driver based on existing profile
	I1217 01:06:52.615534 1277994 start.go:309] selected driver: docker
	I1217 01:06:52.615550 1277994 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.615639 1277994 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:06:52.619160 1277994 out.go:203] 
	W1217 01:06:52.622010 1277994 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 01:06:52.624881 1277994 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
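A dry run validates the requested flags against the existing profile without changing it; the 250MB request trips minikube's 1800MB minimum and the command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), which is what the test asserts (a sketch):

out/minikube-linux-arm64 start -p functional-608344 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
echo $?  # 23 on the memory-validation failure shown above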
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.343117ms)
-- stdout --
	* [functional-608344] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1217 01:06:52.301188 1277948 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:06:52.301420 1277948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.301431 1277948 out.go:374] Setting ErrFile to fd 2...
	I1217 01:06:52.301436 1277948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:06:52.301919 1277948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:06:52.302335 1277948 out.go:368] Setting JSON to false
	I1217 01:06:52.303171 1277948 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24563,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:06:52.303239 1277948 start.go:143] virtualization:  
	I1217 01:06:52.306669 1277948 out.go:179] * [functional-608344] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 01:06:52.310324 1277948 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:06:52.310408 1277948 notify.go:221] Checking for updates...
	I1217 01:06:52.315903 1277948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:06:52.318662 1277948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:06:52.321469 1277948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:06:52.324252 1277948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:06:52.327122 1277948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:06:52.330588 1277948 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:06:52.331145 1277948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:06:52.359364 1277948 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:06:52.359484 1277948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:06:52.419249 1277948 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:06:52.410259118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:06:52.419370 1277948 docker.go:319] overlay module found
	I1217 01:06:52.422431 1277948 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 01:06:52.425333 1277948 start.go:309] selected driver: docker
	I1217 01:06:52.425358 1277948 start.go:927] validating driver "docker" against &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:06:52.425458 1277948 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:06:52.428935 1277948 out.go:203] 
	W1217 01:06:52.431759 1277948 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 01:06:52.434468 1277948 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.62s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh -n functional-608344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cp functional-608344:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp669975446/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh -n functional-608344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh -n functional-608344 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.62s)
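cp is exercised in both directions plus a case where the target directory must be created; a node-side cat after each copy verifies the content (a sketch; /tmp/out.txt is illustrative):

out/minikube-linux-arm64 -p functional-608344 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
out/minikube-linux-arm64 -p functional-608344 cp functional-608344:/home/docker/cp-test.txt /tmp/out.txt  # node -> host
out/minikube-linux-arm64 -p functional-608344 ssh -n functional-608344 "sudo cat /home/docker/cp-test.txt"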
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1211243/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/test/nested/copy/1211243/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)
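File sync works by staging files under the minikube home: anything placed below .minikube/files/ is copied into the node at the same relative path when the cluster starts, which is why the hosts file staged on the host above is readable inside the node (a sketch):

out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/test/nested/copy/1211243/hosts"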
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.63s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1211243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/ssl/certs/1211243.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1211243.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /usr/share/ca-certificates/1211243.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12112432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/ssl/certs/12112432.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12112432.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /usr/share/ca-certificates/12112432.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.63s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active docker": exit status 1 (262.043642ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active crio": exit status 1 (283.837368ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
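With containerd as the active runtime, the docker and crio units must be inactive inside the node; systemctl is-active prints the state and exits 3 for an inactive unit, so the non-zero exits above are the passing outcome (a sketch; the containerd line is an assumed positive control):

out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active containerd"  # active, exit 0
out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active docker"      # inactive, exit 3
out/minikube-linux-arm64 -p functional-608344 ssh "sudo systemctl is-active crio"        # inactive, exit 3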
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (11.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-arm64 license: (11.3587259s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (11.36s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
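StartTunnel launches the tunnel as a background daemon and DeleteTunnel stops it; outside the test harness the rough equivalent from an interactive shell is (a sketch; the harness tracks the process itself rather than a shell job):

out/minikube-linux-arm64 -p functional-608344 tunnel --alsologtostderr &  # keep routes to LoadBalancer services alive
kill %1                                                                   # tear the tunnel down when done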
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "349.09244ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "50.881779ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "317.91432ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.374813ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)
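The timing spread above is the point of the test: a plain profile list probes each cluster's status, while the --light variants skip the status checks and return from local config alone (a sketch):

out/minikube-linux-arm64 profile list                  # full listing with status probes (~350ms here)
out/minikube-linux-arm64 profile list -o json --light  # config-only listing (~53ms here)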
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4153205073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.511796ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 01:06:46.292957 1211243 retry.go:31] will retry after 682.659955ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4153205073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh "sudo umount -f /mount-9p": exit status 1 (261.73076ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-608344 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4153205073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.05s)
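--port pins the 9p server to a fixed host port instead of a random one, and cleanup is just stopping the mount process, after which a forced umount inside the node reports "not mounted" as seen above (a sketch from an interactive shell; the host directory is illustrative):

out/minikube-linux-arm64 mount -p functional-608344 /tmp/host-dir:/mount-9p --port 46464 &
out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the 9p mount is up
kill %1                                                                             # stop the mount daemon
out/minikube-linux-arm64 -p functional-608344 ssh "sudo umount -f /mount-9p"        # now fails: not mounted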
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-608344 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-608344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2094570414/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.25s)
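VerifyCleanup relies on mount --kill, which finds and terminates every mount daemon belonging to the profile in one call (a sketch):

out/minikube-linux-arm64 mount -p functional-608344 --kill=true  # kill all mount processes for this profile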
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-608344 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-608344
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-608344
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-608344 image ls --format short --alsologtostderr:
I1217 01:07:17.575326 1280645 out.go:360] Setting OutFile to fd 1 ...
I1217 01:07:17.575529 1280645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:17.575540 1280645 out.go:374] Setting ErrFile to fd 2...
I1217 01:07:17.575546 1280645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:17.575792 1280645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:07:17.576435 1280645 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:17.576561 1280645 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:17.577105 1280645 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:07:17.594141 1280645 ssh_runner.go:195] Run: systemctl --version
I1217 01:07:17.594205 1280645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:07:17.613500 1280645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:07:17.708387 1280645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-608344 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ localhost/my-image                          │ functional-608344  │ sha256:98ff22 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-608344  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-608344  │ sha256:d3dbcc │ 990B   │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-608344 image ls --format table --alsologtostderr:
I1217 01:07:21.726249 1281038 out.go:360] Setting OutFile to fd 1 ...
I1217 01:07:21.726667 1281038 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:21.726702 1281038 out.go:374] Setting ErrFile to fd 2...
I1217 01:07:21.726723 1281038 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:21.727228 1281038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:07:21.728299 1281038 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:21.728453 1281038 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:21.729159 1281038 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:07:21.747724 1281038 ssh_runner.go:195] Run: systemctl --version
I1217 01:07:21.747784 1281038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:07:21.765608 1281038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:07:21.860366 1281038 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-608344 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-608344"],"size":"2173567"},{"id":"sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-608344"],"size":"990"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:98ff2221b87adb56f52833ed422c8ee7c7eb4b9fdd0bc044cd8cc5ac84f0bcf6","repoDigests":[],"repoTags":["localhost/my-image:functional-608344"],"size":"830617"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-608344 image ls --format json --alsologtostderr:
I1217 01:07:21.503864 1281002 out.go:360] Setting OutFile to fd 1 ...
I1217 01:07:21.504011 1281002 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:21.504043 1281002 out.go:374] Setting ErrFile to fd 2...
I1217 01:07:21.504057 1281002 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:21.504339 1281002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:07:21.505024 1281002 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:21.505204 1281002 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:21.505798 1281002 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:07:21.524571 1281002 ssh_runner.go:195] Run: systemctl --version
I1217 01:07:21.524624 1281002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:07:21.546123 1281002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:07:21.640233 1281002 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
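
Note: the JSON format is the most convenient one to post-process. A minimal sketch using jq (jq is an assumption here, not part of the test), matching the array-of-objects layout shown above:

# Print each image's first repoTag and its size in bytes
out/minikube-linux-arm64 -p functional-608344 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'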

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-608344 image ls --format yaml --alsologtostderr:
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:d3dbccb3b82b6513d2fa489e559c69328b709fecd89b6e03487fb128f1cb5e03
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-608344
size: "990"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-608344
size: "2173567"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-608344 image ls --format yaml --alsologtostderr:
I1217 01:07:17.791004 1280689 out.go:360] Setting OutFile to fd 1 ...
I1217 01:07:17.791185 1280689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:17.791215 1280689 out.go:374] Setting ErrFile to fd 2...
I1217 01:07:17.791234 1280689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:17.791516 1280689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:07:17.792174 1280689 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:17.792353 1280689 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:17.792922 1280689 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:07:17.810740 1280689 ssh_runner.go:195] Run: systemctl --version
I1217 01:07:17.810805 1280689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:07:17.827965 1280689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:07:17.920255 1280689 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-608344 ssh pgrep buildkitd: exit status 1 (266.013269ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image build -t localhost/my-image:functional-608344 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-608344 image build -t localhost/my-image:functional-608344 testdata/build --alsologtostderr: (3.002658977s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-608344 image build -t localhost/my-image:functional-608344 testdata/build --alsologtostderr:
I1217 01:07:18.271569 1280785 out.go:360] Setting OutFile to fd 1 ...
I1217 01:07:18.271744 1280785 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:18.271755 1280785 out.go:374] Setting ErrFile to fd 2...
I1217 01:07:18.271761 1280785 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:07:18.272015 1280785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 01:07:18.272616 1280785 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:18.273294 1280785 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 01:07:18.273879 1280785 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 01:07:18.291527 1280785 ssh_runner.go:195] Run: systemctl --version
I1217 01:07:18.291585 1280785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 01:07:18.308573 1280785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 01:07:18.404579 1280785 build_images.go:162] Building image from path: /tmp/build.2802204863.tar
I1217 01:07:18.404659 1280785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 01:07:18.415182 1280785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2802204863.tar
I1217 01:07:18.419117 1280785 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2802204863.tar: stat -c "%s %y" /var/lib/minikube/build/build.2802204863.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2802204863.tar': No such file or directory
I1217 01:07:18.419147 1280785 ssh_runner.go:362] scp /tmp/build.2802204863.tar --> /var/lib/minikube/build/build.2802204863.tar (3072 bytes)
I1217 01:07:18.442000 1280785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2802204863
I1217 01:07:18.449828 1280785 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2802204863 -xf /var/lib/minikube/build/build.2802204863.tar
I1217 01:07:18.458129 1280785 containerd.go:394] Building image: /var/lib/minikube/build/build.2802204863
I1217 01:07:18.458201 1280785 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2802204863 --local dockerfile=/var/lib/minikube/build/build.2802204863 --output type=image,name=localhost/my-image:functional-608344
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0dc4a9a95e73a2c807b6fa5cd4b58e14086b7d4b7ccc039d97e37d9a7ddef86a 0.0s done
#8 exporting config sha256:98ff2221b87adb56f52833ed422c8ee7c7eb4b9fdd0bc044cd8cc5ac84f0bcf6 0.0s done
#8 naming to localhost/my-image:functional-608344 done
#8 DONE 0.2s
I1217 01:07:21.198844 1280785 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2802204863 --local dockerfile=/var/lib/minikube/build/build.2802204863 --output type=image,name=localhost/my-image:functional-608344: (2.740610868s)
I1217 01:07:21.198927 1280785 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2802204863
I1217 01:07:21.206862 1280785 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2802204863.tar
I1217 01:07:21.214318 1280785 build_images.go:218] Built localhost/my-image:functional-608344 from /tmp/build.2802204863.tar
I1217 01:07:21.214349 1280785 build_images.go:134] succeeded building to: functional-608344
I1217 01:07:21.214354 1280785 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.49s)
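
Note: the Dockerfile under testdata/build is not reproduced in the log, but the buildkit steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) suggest a context roughly like the following sketch; the directory path and file contents here are assumptions:

# Recreate a minimal build context like the one exercised above (hypothetical contents)
mkdir -p /tmp/build-demo
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
echo demo > /tmp/build-demo/content.txt
out/minikube-linux-arm64 -p functional-608344 image build -t localhost/my-image:functional-608344 /tmp/build-demo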

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)
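
Note: image load --daemon copies an image from the host's docker daemon into the cluster's container runtime. A sketch of the full sequence, built only from commands that appear elsewhere in this run:

# Pull and tag on the host, then push the tag into the cluster runtime
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-608344
out/minikube-linux-arm64 -p functional-608344 image load --daemon kicbase/echo-server:functional-608344
out/minikube-linux-arm64 -p functional-608344 image ls | grep echo-server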

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-608344
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image load --daemon kicbase/echo-server:functional-608344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image save kicbase/echo-server:functional-608344 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image rm kicbase/echo-server:functional-608344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
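
Note: image save and image load form a round trip through a tarball; the /tmp path below is a hypothetical stand-in for the workspace path used in the test:

# Save, remove, then restore the image from the tarball
out/minikube-linux-arm64 -p functional-608344 image save kicbase/echo-server:functional-608344 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-608344 image rm kicbase/echo-server:functional-608344
out/minikube-linux-arm64 -p functional-608344 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-608344 image ls | grep echo-server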

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-608344
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 image save --daemon kicbase/echo-server:functional-608344 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-608344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)
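
Note: update-context rewrites the profile's kubeconfig entry (API server address and port) without restarting anything. A quick manual verification, assuming kubectl is installed and the profile has a kubeconfig cluster entry of the same name:

out/minikube-linux-arm64 -p functional-608344 update-context
# Print the API server URL now recorded for the cluster entry
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-608344")].cluster.server}'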

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-608344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (183.48s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1217 01:09:50.423065 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.429365 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.440694 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.461989 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.503329 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.584669 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:50.746096 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:51.067653 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:51.709636 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:52.991094 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:09:55.553780 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:10:00.675432 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:10:09.433260 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:10:10.917412 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:10:31.399496 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:11:12.361089 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m2.6060317s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
E1217 01:11:56.877863 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (183.48s)
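
Note: --ha provisions a multi-control-plane cluster (three control planes here, with a worker added later in the run). Once it is up, the topology can be inspected directly; kubectl is an illustrative assumption, the status command is the one used above:

kubectl --context ha-702663 get nodes -o wide
out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5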

TestMultiControlPlane/serial/DeployApp (8.48s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 kubectl -- rollout status deployment/busybox: (5.39836496s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-q2js8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-vnp6m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-q2js8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-vnp6m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-q2js8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-vnp6m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.48s)
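
Note: testdata/ha/ha-pod-dns-test.yaml is not reproduced in the log; judging by the three busybox pods and the rollout check above, it is roughly a three-replica busybox Deployment. A reconstructed sketch (image and command are assumptions):

kubectl --context ha-702663 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox
        command: ["sleep", "3600"]
EOF
kubectl --context ha-702663 rollout status deployment/busybox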

TestMultiControlPlane/serial/PingHostFromPods (1.63s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-q2js8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-q2js8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-vnp6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-vnp6m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)
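
Note: the sh -c "nslookup ... | awk 'NR==5' | cut -d' ' -f3" pipeline picks the resolved address out of busybox's nslookup output: the answer lands on line 5, and the address is the third space-separated field. A standalone sketch reusing a pod name from this run:

# Resolve the host address from inside a pod, then ping it once
HOST_IP=$(out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
echo "${HOST_IP}"   # 192.168.49.1 in this run
out/minikube-linux-arm64 -p ha-702663 kubectl -- exec busybox-7b57f96db7-cl9qz -- sh -c "ping -c 1 ${HOST_IP}"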

TestMultiControlPlane/serial/AddWorkerNode (59.63s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node add --alsologtostderr -v 5
E1217 01:12:34.284056 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 node add --alsologtostderr -v 5: (58.579656856s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5: (1.052603386s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.63s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-702663 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.028381883s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.97s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 status --output json --alsologtostderr -v 5: (1.018708496s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp testdata/cp-test.txt ha-702663:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1613966607/001/cp-test_ha-702663.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663:/home/docker/cp-test.txt ha-702663-m02:/home/docker/cp-test_ha-702663_ha-702663-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test_ha-702663_ha-702663-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663:/home/docker/cp-test.txt ha-702663-m03:/home/docker/cp-test_ha-702663_ha-702663-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test_ha-702663_ha-702663-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663:/home/docker/cp-test.txt ha-702663-m04:/home/docker/cp-test_ha-702663_ha-702663-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test_ha-702663_ha-702663-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp testdata/cp-test.txt ha-702663-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1613966607/001/cp-test_ha-702663-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m02:/home/docker/cp-test.txt ha-702663:/home/docker/cp-test_ha-702663-m02_ha-702663.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test_ha-702663-m02_ha-702663.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m02:/home/docker/cp-test.txt ha-702663-m03:/home/docker/cp-test_ha-702663-m02_ha-702663-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test_ha-702663-m02_ha-702663-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m02:/home/docker/cp-test.txt ha-702663-m04:/home/docker/cp-test_ha-702663-m02_ha-702663-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test_ha-702663-m02_ha-702663-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp testdata/cp-test.txt ha-702663-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1613966607/001/cp-test_ha-702663-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m03:/home/docker/cp-test.txt ha-702663:/home/docker/cp-test_ha-702663-m03_ha-702663.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test_ha-702663-m03_ha-702663.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m03:/home/docker/cp-test.txt ha-702663-m02:/home/docker/cp-test_ha-702663-m03_ha-702663-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test_ha-702663-m03_ha-702663-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m03:/home/docker/cp-test.txt ha-702663-m04:/home/docker/cp-test_ha-702663-m03_ha-702663-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test_ha-702663-m03_ha-702663-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp testdata/cp-test.txt ha-702663-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1613966607/001/cp-test_ha-702663-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m04:/home/docker/cp-test.txt ha-702663:/home/docker/cp-test_ha-702663-m04_ha-702663.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663 "sudo cat /home/docker/cp-test_ha-702663-m04_ha-702663.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m04:/home/docker/cp-test.txt ha-702663-m02:/home/docker/cp-test_ha-702663-m04_ha-702663-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test_ha-702663-m04_ha-702663-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 cp ha-702663-m04:/home/docker/cp-test.txt ha-702663-m03:/home/docker/cp-test_ha-702663-m04_ha-702663-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m03 "sudo cat /home/docker/cp-test_ha-702663-m04_ha-702663-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.97s)
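
Note: every copy above is verified with ssh -n plus sudo cat; for any source file and target node the pattern reduces to this pair of commands:

# Copy a file into a node, then read it back over ssh
out/minikube-linux-arm64 -p ha-702663 cp testdata/cp-test.txt ha-702663-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-702663 ssh -n ha-702663-m02 "sudo cat /home/docker/cp-test.txt"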

TestMultiControlPlane/serial/StopSecondaryNode (12.99s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 node stop m02 --alsologtostderr -v 5: (12.188048749s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5: exit status 7 (800.804146ms)

-- stdout --
	ha-702663
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-702663-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-702663-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-702663-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1217 01:13:40.802865 1298389 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:13:40.803100 1298389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:13:40.803133 1298389 out.go:374] Setting ErrFile to fd 2...
	I1217 01:13:40.803151 1298389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:13:40.803405 1298389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:13:40.803613 1298389 out.go:368] Setting JSON to false
	I1217 01:13:40.803687 1298389 mustload.go:66] Loading cluster: ha-702663
	I1217 01:13:40.803780 1298389 notify.go:221] Checking for updates...
	I1217 01:13:40.804144 1298389 config.go:182] Loaded profile config "ha-702663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:13:40.804191 1298389 status.go:174] checking status of ha-702663 ...
	I1217 01:13:40.804755 1298389 cli_runner.go:164] Run: docker container inspect ha-702663 --format={{.State.Status}}
	I1217 01:13:40.824865 1298389 status.go:371] ha-702663 host status = "Running" (err=<nil>)
	I1217 01:13:40.824887 1298389 host.go:66] Checking if "ha-702663" exists ...
	I1217 01:13:40.825220 1298389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-702663
	I1217 01:13:40.878947 1298389 host.go:66] Checking if "ha-702663" exists ...
	I1217 01:13:40.879268 1298389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:40.879310 1298389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-702663
	I1217 01:13:40.904996 1298389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/ha-702663/id_rsa Username:docker}
	I1217 01:13:41.009895 1298389 ssh_runner.go:195] Run: systemctl --version
	I1217 01:13:41.017630 1298389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:13:41.032773 1298389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:13:41.100477 1298389 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-17 01:13:41.088750234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:13:41.101015 1298389 kubeconfig.go:125] found "ha-702663" server: "https://192.168.49.254:8443"
	I1217 01:13:41.101066 1298389 api_server.go:166] Checking apiserver status ...
	I1217 01:13:41.101121 1298389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:13:41.114915 1298389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	I1217 01:13:41.123317 1298389 api_server.go:182] apiserver freezer: "11:freezer:/docker/77570683a44f721986ffcaa300ada4dd721c9381104b7d7e3394d5ca3b6a3188/kubepods/burstable/poda796a5b8502b9db2c6fc6664cdd6298d/ed74a4054953820b4e1bec4a43624fe4100662080ac0db929b2fd362ff8a6e40"
	I1217 01:13:41.123388 1298389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/77570683a44f721986ffcaa300ada4dd721c9381104b7d7e3394d5ca3b6a3188/kubepods/burstable/poda796a5b8502b9db2c6fc6664cdd6298d/ed74a4054953820b4e1bec4a43624fe4100662080ac0db929b2fd362ff8a6e40/freezer.state
	I1217 01:13:41.131777 1298389 api_server.go:204] freezer state: "THAWED"
	I1217 01:13:41.131808 1298389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:13:41.140171 1298389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:13:41.140205 1298389 status.go:463] ha-702663 apiserver status = Running (err=<nil>)
	I1217 01:13:41.140218 1298389 status.go:176] ha-702663 status: &{Name:ha-702663 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:13:41.140236 1298389 status.go:174] checking status of ha-702663-m02 ...
	I1217 01:13:41.140568 1298389 cli_runner.go:164] Run: docker container inspect ha-702663-m02 --format={{.State.Status}}
	I1217 01:13:41.160124 1298389 status.go:371] ha-702663-m02 host status = "Stopped" (err=<nil>)
	I1217 01:13:41.160150 1298389 status.go:384] host is not running, skipping remaining checks
	I1217 01:13:41.160158 1298389 status.go:176] ha-702663-m02 status: &{Name:ha-702663-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:13:41.160178 1298389 status.go:174] checking status of ha-702663-m03 ...
	I1217 01:13:41.160492 1298389 cli_runner.go:164] Run: docker container inspect ha-702663-m03 --format={{.State.Status}}
	I1217 01:13:41.179609 1298389 status.go:371] ha-702663-m03 host status = "Running" (err=<nil>)
	I1217 01:13:41.179638 1298389 host.go:66] Checking if "ha-702663-m03" exists ...
	I1217 01:13:41.179931 1298389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-702663-m03
	I1217 01:13:41.203331 1298389 host.go:66] Checking if "ha-702663-m03" exists ...
	I1217 01:13:41.203697 1298389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:41.203745 1298389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-702663-m03
	I1217 01:13:41.222144 1298389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/ha-702663-m03/id_rsa Username:docker}
	I1217 01:13:41.315181 1298389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:13:41.328448 1298389 kubeconfig.go:125] found "ha-702663" server: "https://192.168.49.254:8443"
	I1217 01:13:41.328474 1298389 api_server.go:166] Checking apiserver status ...
	I1217 01:13:41.328515 1298389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:13:41.347283 1298389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	I1217 01:13:41.356757 1298389 api_server.go:182] apiserver freezer: "11:freezer:/docker/32d5fe30f779b9f8cf1a000e3d6df0eb8792313f3b168654bd756c5f8ea30109/kubepods/burstable/pod06ab5f603a5a5b6eba153b8f7bdb4ec1/2af175bab045937fc0a37c3dc69ae7f399a21bee63d783e040f772c4ca7d851a"
	I1217 01:13:41.356840 1298389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/32d5fe30f779b9f8cf1a000e3d6df0eb8792313f3b168654bd756c5f8ea30109/kubepods/burstable/pod06ab5f603a5a5b6eba153b8f7bdb4ec1/2af175bab045937fc0a37c3dc69ae7f399a21bee63d783e040f772c4ca7d851a/freezer.state
	I1217 01:13:41.364673 1298389 api_server.go:204] freezer state: "THAWED"
	I1217 01:13:41.364713 1298389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:13:41.372814 1298389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:13:41.372843 1298389 status.go:463] ha-702663-m03 apiserver status = Running (err=<nil>)
	I1217 01:13:41.372852 1298389 status.go:176] ha-702663-m03 status: &{Name:ha-702663-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:13:41.372870 1298389 status.go:174] checking status of ha-702663-m04 ...
	I1217 01:13:41.373217 1298389 cli_runner.go:164] Run: docker container inspect ha-702663-m04 --format={{.State.Status}}
	I1217 01:13:41.390804 1298389 status.go:371] ha-702663-m04 host status = "Running" (err=<nil>)
	I1217 01:13:41.390832 1298389 host.go:66] Checking if "ha-702663-m04" exists ...
	I1217 01:13:41.391155 1298389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-702663-m04
	I1217 01:13:41.409876 1298389 host.go:66] Checking if "ha-702663-m04" exists ...
	I1217 01:13:41.410195 1298389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:41.410246 1298389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-702663-m04
	I1217 01:13:41.430900 1298389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/ha-702663-m04/id_rsa Username:docker}
	I1217 01:13:41.523136 1298389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:13:41.536423 1298389 status.go:176] ha-702663-m04 status: &{Name:ha-702663-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.99s)
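The stderr trace above shows the per-node health-check sequence that status runs: inspect the container state, SSH in to confirm the kubelet unit is active, find the kube-apiserver process and verify its freezer cgroup is THAWED, then probe /healthz on the shared endpoint. A minimal Go sketch of that final probe, assuming the load-balanced endpoint from this run's logs (https://192.168.49.254:8443); InsecureSkipVerify stands in for minikube's real client certificates and is for illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification purely for this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}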

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.1s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 node start m02 --alsologtostderr -v 5: (12.563191069s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5: (1.41776376s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.120274429s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 stop --alsologtostderr -v 5: (37.548064694s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 start --wait true --alsologtostderr -v 5
E1217 01:14:50.423518 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:14:59.945595 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:15:09.433840 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:15:18.126384 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 start --wait true --alsologtostderr -v 5: (1m0.700887602s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 node delete m03 --alsologtostderr -v 5: (9.99388535s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.00s)
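The last kubectl call above renders each node's Ready condition through a go-template. A minimal sketch of the same template evaluated with Go's text/template; the one-node object below is a hypothetical stand-in for parsed kubectl get nodes -o json output.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical stand-in for the node list kubectl would return.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// The exact template string from the test invocation above.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	_ = tmpl.Execute(os.Stdout, nodes) // prints " True" for a healthy node
}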

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.67s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 stop --alsologtostderr -v 5: (36.555618873s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5: exit status 7 (116.339627ms)
-- stdout --
	ha-702663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-702663-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-702663-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 01:16:24.447848 1313284 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:16:24.447973 1313284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:16:24.447984 1313284 out.go:374] Setting ErrFile to fd 2...
	I1217 01:16:24.447989 1313284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:16:24.448253 1313284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:16:24.448437 1313284 out.go:368] Setting JSON to false
	I1217 01:16:24.448472 1313284 mustload.go:66] Loading cluster: ha-702663
	I1217 01:16:24.448580 1313284 notify.go:221] Checking for updates...
	I1217 01:16:24.448915 1313284 config.go:182] Loaded profile config "ha-702663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:16:24.448941 1313284 status.go:174] checking status of ha-702663 ...
	I1217 01:16:24.449847 1313284 cli_runner.go:164] Run: docker container inspect ha-702663 --format={{.State.Status}}
	I1217 01:16:24.467734 1313284 status.go:371] ha-702663 host status = "Stopped" (err=<nil>)
	I1217 01:16:24.467760 1313284 status.go:384] host is not running, skipping remaining checks
	I1217 01:16:24.467767 1313284 status.go:176] ha-702663 status: &{Name:ha-702663 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:16:24.467802 1313284 status.go:174] checking status of ha-702663-m02 ...
	I1217 01:16:24.468123 1313284 cli_runner.go:164] Run: docker container inspect ha-702663-m02 --format={{.State.Status}}
	I1217 01:16:24.491261 1313284 status.go:371] ha-702663-m02 host status = "Stopped" (err=<nil>)
	I1217 01:16:24.491285 1313284 status.go:384] host is not running, skipping remaining checks
	I1217 01:16:24.491293 1313284 status.go:176] ha-702663-m02 status: &{Name:ha-702663-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:16:24.491313 1313284 status.go:174] checking status of ha-702663-m04 ...
	I1217 01:16:24.491605 1313284 cli_runner.go:164] Run: docker container inspect ha-702663-m04 --format={{.State.Status}}
	I1217 01:16:24.512922 1313284 status.go:371] ha-702663-m04 host status = "Stopped" (err=<nil>)
	I1217 01:16:24.512945 1313284 status.go:384] host is not running, skipping remaining checks
	I1217 01:16:24.512952 1313284 status.go:176] ha-702663-m04 status: &{Name:ha-702663-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.67s)
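Note that both status calls against stopped nodes exit 7 rather than failing outright: per minikube's status help text, the exit code encodes host (1), cluster (2), and Kubernetes (4) being not OK, so 7 means all three are down. A minimal sketch of reading that code, using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-702663", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node status table shown above
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 = host(1) + cluster(2) + kubernetes(4) all reported stopped.
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
	}
}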

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.54s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1217 01:16:56.877331 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.57923597s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.54s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (80.55s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 node add --control-plane --alsologtostderr -v 5: (1m19.474818774s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-702663 status --alsologtostderr -v 5: (1.076118575s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090567657s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (48.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-424858 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-424858 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (48.362672503s)
--- PASS: TestJSONOutput/start/Command (48.38s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-424858 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-424858 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-424858 --output=json --user=testUser
E1217 01:19:50.423346 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-424858 --output=json --user=testUser: (6.019811864s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-810075 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-810075 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (102.111486ms)
-- stdout --
	{"specversion":"1.0","id":"e4de8276-6605-4c3b-86ac-e3e8f76ec2d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-810075] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"08798ba8-8405-49f1-8e86-95faec437640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"bc8de33a-0d9d-4356-acfe-64cf9ac32a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c329faf0-760f-447c-93be-255b78addb40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig"}}
	{"specversion":"1.0","id":"d17a0c81-508e-47f9-a547-50db3da16f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube"}}
	{"specversion":"1.0","id":"f916a045-a5e7-406c-b3a0-952f0e93fd11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f0d12151-9927-4133-869d-9a5a114296b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dc7adfe6-d207-4e14-a814-00c995874da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-810075" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-810075
--- PASS: TestErrorJSONOutput (0.26s)
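Each stdout line above is a CloudEvents-style JSON object, and the failure surfaces as an io.k8s.sigs.minikube.error event whose data carries the exit code. A minimal sketch of filtering such a stream; the struct is a pared-down illustration rather than minikube's own type, and the program reads events piped to its stdin.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent is a pared-down view of the JSON lines shown above.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprog
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}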

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.83s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-261569 --network=
E1217 01:20:09.433356 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-261569 --network=: (40.611545743s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-261569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-261569
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-261569: (2.193358386s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.83s)

TestKicCustomNetwork/use_default_bridge_network (35.9s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-090195 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-090195 --network=bridge: (33.772404105s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-090195" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-090195
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-090195: (2.101304149s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.90s)

TestKicExistingNetwork (33.11s)

=== RUN   TestKicExistingNetwork
I1217 01:21:15.650996 1211243 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 01:21:15.667274 1211243 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 01:21:15.667355 1211243 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 01:21:15.667376 1211243 cli_runner.go:164] Run: docker network inspect existing-network
W1217 01:21:15.683704 1211243 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 01:21:15.683734 1211243 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1217 01:21:15.683754 1211243 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1217 01:21:15.683877 1211243 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:21:15.701163 1211243 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d3df4750b8cc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:2b:39:f5:d5:bc} reservation:<nil>}
I1217 01:21:15.701488 1211243 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001497160}
I1217 01:21:15.701528 1211243 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 01:21:15.701581 1211243 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 01:21:15.767044 1211243 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-072331 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-072331 --network=existing-network: (30.831295682s)
helpers_test.go:176: Cleaning up "existing-network-072331" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-072331
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-072331: (2.134503918s)
I1217 01:21:48.749377 1211243 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.11s)
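The trace above walks the free-subnet search: docker network inspect confirms existing-network is absent, 192.168.49.0/24 is skipped because the default minikube bridge already owns it, and the network is created on the next private /24. A minimal sketch of that probe-and-create loop shelling out to Docker; the candidate subnets and label mirror the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Candidate /24s in the order this run's logs show them being considered.
	for _, subnet := range []string{"192.168.49.0/24", "192.168.58.0/24"} {
		cmd := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet,
			"--label=created_by.minikube.sigs.k8s.io=true",
			"existing-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			// Typically a "Pool overlaps" error when the subnet is taken.
			fmt.Printf("subnet %s rejected: %s", subnet, out)
			continue
		}
		fmt.Println("created existing-network on", subnet)
		return
	}
}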

                                                
                                    
TestKicCustomSubnet (36.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-047472 --subnet=192.168.60.0/24
E1217 01:21:56.876752 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-047472 --subnet=192.168.60.0/24: (34.45128651s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-047472 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-047472" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-047472
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-047472: (2.195525416s)
--- PASS: TestKicCustomSubnet (36.67s)

TestKicStaticIP (35.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-693232 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-693232 --static-ip=192.168.200.200: (33.614720047s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-693232 ip
helpers_test.go:176: Cleaning up "static-ip-693232" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-693232
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-693232: (2.178208671s)
--- PASS: TestKicStaticIP (35.97s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-965024 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-965024 --driver=docker  --container-runtime=containerd: (33.207304138s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-968019 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-968019 --driver=docker  --container-runtime=containerd: (31.592374217s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-965024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-968019
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-968019" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-968019
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-968019: (2.068775223s)
helpers_test.go:176: Cleaning up "first-965024" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-965024
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-965024: (2.075720639s)
--- PASS: TestMinikubeProfile (70.37s)

TestMountStart/serial/StartWithMountFirst (8.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-106735 --memory=3072 --mount-string /tmp/TestMountStartserial2003091915/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-106735 --memory=3072 --mount-string /tmp/TestMountStartserial2003091915/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.144051132s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.14s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-106735 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-108581 --memory=3072 --mount-string /tmp/TestMountStartserial2003091915/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-108581 --memory=3072 --mount-string /tmp/TestMountStartserial2003091915/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.096273479s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.10s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-108581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-106735 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-106735 --alsologtostderr -v=5: (1.744541747s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-108581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-108581
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-108581: (1.295865104s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.51s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-108581
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-108581: (6.509989888s)
--- PASS: TestMountStart/serial/RestartStopped (7.51s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-108581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (106.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-294395 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1217 01:24:50.423190 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:24:52.513019 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:25:09.433274 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:26:13.487736 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-294395 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m46.443549649s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.97s)

TestMultiNode/serial/DeployApp2Nodes (4.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-294395 -- rollout status deployment/busybox: (2.800963275s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-9cs2b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-kcbd4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-9cs2b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-kcbd4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-9cs2b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-kcbd4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.71s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-9cs2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-9cs2b -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-kcbd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-294395 -- exec busybox-7b57f96db7-kcbd4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
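
Note on the extraction pipeline above: `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` leans on the fixed shape of busybox's nslookup output, where line 5 is the answer record and the IP is its third space-separated field. A minimal sketch of that assumption (the kube-dns server address shown is illustrative; 192.168.67.1 is the host gateway the test then pings):

	$ nslookup host.minikube.internal
	Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
	
	Name:      host.minikube.internal
	Address 1: 192.168.67.1
	$ nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	192.168.67.1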

TestMultiNode/serial/AddNode (27.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-294395 -v=5 --alsologtostderr
E1217 01:26:56.877801 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-294395 -v=5 --alsologtostderr: (26.876071507s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.58s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-294395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp testdata/cp-test.txt multinode-294395:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2826728569/001/cp-test_multinode-294395.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395:/home/docker/cp-test.txt multinode-294395-m02:/home/docker/cp-test_multinode-294395_multinode-294395-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test_multinode-294395_multinode-294395-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395:/home/docker/cp-test.txt multinode-294395-m03:/home/docker/cp-test_multinode-294395_multinode-294395-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test_multinode-294395_multinode-294395-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp testdata/cp-test.txt multinode-294395-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2826728569/001/cp-test_multinode-294395-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m02:/home/docker/cp-test.txt multinode-294395:/home/docker/cp-test_multinode-294395-m02_multinode-294395.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test_multinode-294395-m02_multinode-294395.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m02:/home/docker/cp-test.txt multinode-294395-m03:/home/docker/cp-test_multinode-294395-m02_multinode-294395-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test_multinode-294395-m02_multinode-294395-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp testdata/cp-test.txt multinode-294395-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2826728569/001/cp-test_multinode-294395-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m03:/home/docker/cp-test.txt multinode-294395:/home/docker/cp-test_multinode-294395-m03_multinode-294395.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395 "sudo cat /home/docker/cp-test_multinode-294395-m03_multinode-294395.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 cp multinode-294395-m03:/home/docker/cp-test.txt multinode-294395-m02:/home/docker/cp-test_multinode-294395-m03_multinode-294395-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 ssh -n multinode-294395-m02 "sudo cat /home/docker/cp-test_multinode-294395-m03_multinode-294395-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
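
For reference, the copy matrix above exercises the three directions `minikube cp <source> <target>` supports, where a node-local path is written as <node>:<path>; the paths below are illustrative:

	# host -> node
	$ minikube -p multinode-294395 cp testdata/cp-test.txt multinode-294395:/home/docker/cp-test.txt
	# node -> host
	$ minikube -p multinode-294395 cp multinode-294395:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	$ minikube -p multinode-294395 cp multinode-294395:/home/docker/cp-test.txt multinode-294395-m02:/home/docker/cp-test.txt

Each copy is then verified with `minikube ssh -n <node> "sudo cat <path>"` against the expected contents.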

TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-294395 node stop m03: (1.302405646s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-294395 status: exit status 7 (544.876119ms)

-- stdout --
	multinode-294395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-294395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-294395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr: exit status 7 (548.492973ms)

-- stdout --
	multinode-294395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-294395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-294395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:27:14.818923 1366419 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:27:14.819084 1366419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:27:14.819095 1366419 out.go:374] Setting ErrFile to fd 2...
	I1217 01:27:14.819101 1366419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:27:14.819352 1366419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:27:14.819537 1366419 out.go:368] Setting JSON to false
	I1217 01:27:14.819567 1366419 mustload.go:66] Loading cluster: multinode-294395
	I1217 01:27:14.819625 1366419 notify.go:221] Checking for updates...
	I1217 01:27:14.820039 1366419 config.go:182] Loaded profile config "multinode-294395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:27:14.820064 1366419 status.go:174] checking status of multinode-294395 ...
	I1217 01:27:14.820920 1366419 cli_runner.go:164] Run: docker container inspect multinode-294395 --format={{.State.Status}}
	I1217 01:27:14.840675 1366419 status.go:371] multinode-294395 host status = "Running" (err=<nil>)
	I1217 01:27:14.840700 1366419 host.go:66] Checking if "multinode-294395" exists ...
	I1217 01:27:14.841001 1366419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-294395
	I1217 01:27:14.868313 1366419 host.go:66] Checking if "multinode-294395" exists ...
	I1217 01:27:14.868631 1366419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:27:14.868684 1366419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-294395
	I1217 01:27:14.888184 1366419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/multinode-294395/id_rsa Username:docker}
	I1217 01:27:14.983162 1366419 ssh_runner.go:195] Run: systemctl --version
	I1217 01:27:14.989715 1366419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:27:15.019901 1366419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:27:15.091920 1366419 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 01:27:15.079241417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:27:15.092529 1366419 kubeconfig.go:125] found "multinode-294395" server: "https://192.168.67.2:8443"
	I1217 01:27:15.092572 1366419 api_server.go:166] Checking apiserver status ...
	I1217 01:27:15.092625 1366419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:27:15.106373 1366419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup
	I1217 01:27:15.115812 1366419 api_server.go:182] apiserver freezer: "11:freezer:/docker/71ec00f4c87a2676247e31c9387bf316389e9df7db03079327ebb992ccc74937/kubepods/burstable/pod0e958c53d9a9674a81193ce6d0b19aba/fddae17a2dde0728171f4dae4c7f984e778c73a8b586004165ae428e0fd29044"
	I1217 01:27:15.115884 1366419 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/71ec00f4c87a2676247e31c9387bf316389e9df7db03079327ebb992ccc74937/kubepods/burstable/pod0e958c53d9a9674a81193ce6d0b19aba/fddae17a2dde0728171f4dae4c7f984e778c73a8b586004165ae428e0fd29044/freezer.state
	I1217 01:27:15.124252 1366419 api_server.go:204] freezer state: "THAWED"
	I1217 01:27:15.124293 1366419 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 01:27:15.132626 1366419 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 01:27:15.132656 1366419 status.go:463] multinode-294395 apiserver status = Running (err=<nil>)
	I1217 01:27:15.132667 1366419 status.go:176] multinode-294395 status: &{Name:multinode-294395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:27:15.132714 1366419 status.go:174] checking status of multinode-294395-m02 ...
	I1217 01:27:15.133069 1366419 cli_runner.go:164] Run: docker container inspect multinode-294395-m02 --format={{.State.Status}}
	I1217 01:27:15.151058 1366419 status.go:371] multinode-294395-m02 host status = "Running" (err=<nil>)
	I1217 01:27:15.151084 1366419 host.go:66] Checking if "multinode-294395-m02" exists ...
	I1217 01:27:15.151401 1366419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-294395-m02
	I1217 01:27:15.168540 1366419 host.go:66] Checking if "multinode-294395-m02" exists ...
	I1217 01:27:15.168878 1366419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:27:15.168928 1366419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-294395-m02
	I1217 01:27:15.187268 1366419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34073 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/multinode-294395-m02/id_rsa Username:docker}
	I1217 01:27:15.278827 1366419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:27:15.291811 1366419 status.go:176] multinode-294395-m02 status: &{Name:multinode-294395-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:27:15.291844 1366419 status.go:174] checking status of multinode-294395-m03 ...
	I1217 01:27:15.292190 1366419 cli_runner.go:164] Run: docker container inspect multinode-294395-m03 --format={{.State.Status}}
	I1217 01:27:15.310108 1366419 status.go:371] multinode-294395-m03 host status = "Stopped" (err=<nil>)
	I1217 01:27:15.310132 1366419 status.go:384] host is not running, skipping remaining checks
	I1217 01:27:15.310139 1366419 status.go:176] multinode-294395-m03 status: &{Name:multinode-294395-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
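
The stderr trace above shows the chain `status` uses to call the apiserver Running: find the kube-apiserver PID, map it to its freezer cgroup, confirm the cgroup is THAWED, then probe /healthz. Rendered as standalone commands (IDs abbreviated; the curl step is an assumption standing in for the in-process HTTP check):

	$ sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	1347
	$ sudo egrep '^[0-9]+:freezer:' /proc/1347/cgroup
	11:freezer:/docker/71ec00f4.../kubepods/burstable/.../fddae17a...
	$ sudo cat /sys/fs/cgroup/freezer/docker/71ec00f4.../kubepods/burstable/.../fddae17a.../freezer.state
	THAWED
	$ curl -sk https://192.168.67.2:8443/healthz
	ok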

TestMultiNode/serial/StartAfterStop (7.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-294395 node start m03 -v=5 --alsologtostderr: (7.195631912s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.95s)

TestMultiNode/serial/RestartKeepsNodes (78.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-294395
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-294395
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-294395: (25.173339678s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-294395 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-294395 --wait=true -v=5 --alsologtostderr: (52.787492681s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-294395
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.10s)

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-294395 node delete m03: (4.944190485s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)
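
The go-template above iterates every node's conditions and prints the status of each Ready condition, one line per node, so a healthy two-node cluster after the delete should print two True lines. Expanded (output illustrative):

	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	 True
	 True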

TestMultiNode/serial/StopMultiNode (24.24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-294395 stop: (24.038137826s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-294395 status: exit status 7 (103.248057ms)

-- stdout --
	multinode-294395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-294395-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr: exit status 7 (95.76419ms)

-- stdout --
	multinode-294395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-294395-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:29:11.175994 1375197 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:29:11.176148 1375197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:29:11.176161 1375197 out.go:374] Setting ErrFile to fd 2...
	I1217 01:29:11.176168 1375197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:29:11.176574 1375197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:29:11.176850 1375197 out.go:368] Setting JSON to false
	I1217 01:29:11.176899 1375197 mustload.go:66] Loading cluster: multinode-294395
	I1217 01:29:11.178178 1375197 notify.go:221] Checking for updates...
	I1217 01:29:11.178258 1375197 config.go:182] Loaded profile config "multinode-294395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:29:11.178301 1375197 status.go:174] checking status of multinode-294395 ...
	I1217 01:29:11.179192 1375197 cli_runner.go:164] Run: docker container inspect multinode-294395 --format={{.State.Status}}
	I1217 01:29:11.198448 1375197 status.go:371] multinode-294395 host status = "Stopped" (err=<nil>)
	I1217 01:29:11.198469 1375197 status.go:384] host is not running, skipping remaining checks
	I1217 01:29:11.198476 1375197 status.go:176] multinode-294395 status: &{Name:multinode-294395 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:29:11.198510 1375197 status.go:174] checking status of multinode-294395-m02 ...
	I1217 01:29:11.198830 1375197 cli_runner.go:164] Run: docker container inspect multinode-294395-m02 --format={{.State.Status}}
	I1217 01:29:11.219329 1375197 status.go:371] multinode-294395-m02 host status = "Stopped" (err=<nil>)
	I1217 01:29:11.219354 1375197 status.go:384] host is not running, skipping remaining checks
	I1217 01:29:11.219361 1375197 status.go:176] multinode-294395-m02 status: &{Name:multinode-294395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.24s)

TestMultiNode/serial/RestartMultiNode (57.78s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-294395 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1217 01:29:50.423217 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-294395 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (57.079036939s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-294395 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.78s)

TestMultiNode/serial/ValidateNameConflict (36.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-294395
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-294395-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-294395-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.812601ms)

-- stdout --
	* [multinode-294395-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-294395-m02' is duplicated with machine name 'multinode-294395-m02' in profile 'multinode-294395'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-294395-m03 --driver=docker  --container-runtime=containerd
E1217 01:30:09.433347 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-294395-m03 --driver=docker  --container-runtime=containerd: (34.15822444s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-294395
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-294395: exit status 80 (335.674523ms)

-- stdout --
	* Adding node m03 to cluster multinode-294395 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-294395-m03 already exists in multinode-294395-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-294395-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-294395-m03: (2.153193929s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.80s)
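
Both rejections above are name-collision guards: a new profile may not reuse the machine name of an existing profile's node (exit 14, MK_USAGE), and `node add` refuses when the next node name already exists as its own profile (exit 80, GUEST_NODE_ADD). Condensed, assuming the multinode profile is still running:

	# multinode-294395 already owns a machine named multinode-294395-m02
	$ minikube start -p multinode-294395-m02 --driver=docker --container-runtime=containerd
	X Exiting due to MK_USAGE: Profile name should be unique
	# a standalone multinode-294395-m03 profile blocks adding a node of that name
	$ minikube node add -p multinode-294395
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-294395-m03 already exists in multinode-294395-m03 profile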

TestPreload (120.71s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-016479 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1217 01:31:39.947734 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-016479 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (55.861998454s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-016479 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-016479 image pull gcr.io/k8s-minikube/busybox: (2.2580209s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-016479
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-016479: (5.944405183s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-016479 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1217 01:31:56.876717 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-016479 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.00893603s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-016479 image list
helpers_test.go:176: Cleaning up "test-preload-016479" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-016479
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-016479: (2.398559231s)
--- PASS: TestPreload (120.71s)
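
The sequence above verifies that an image pulled into a cluster started with --preload=false survives a stop and a restart with --preload=true. Condensed, with the final check sketched (the grep and its output line are illustrative):

	$ minikube -p test-preload-016479 image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p test-preload-016479
	$ minikube start -p test-preload-016479 --preload=true --wait=true --driver=docker --container-runtime=containerd
	$ minikube -p test-preload-016479 image list | grep busybox
	gcr.io/k8s-minikube/busybox:latest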

TestScheduledStopUnix (109s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-557771 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-557771 --memory=3072 --driver=docker  --container-runtime=containerd: (32.818795921s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-557771 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:33:23.624430 1391027 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:33:23.624921 1391027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:23.624937 1391027 out.go:374] Setting ErrFile to fd 2...
	I1217 01:33:23.624944 1391027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:23.625320 1391027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:33:23.625886 1391027 out.go:368] Setting JSON to false
	I1217 01:33:23.626070 1391027 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:23.626483 1391027 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:33:23.626581 1391027 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/config.json ...
	I1217 01:33:23.626826 1391027 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:23.627008 1391027 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-557771 -n scheduled-stop-557771
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-557771 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:33:24.099216 1391118 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:33:24.099313 1391118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:24.099319 1391118 out.go:374] Setting ErrFile to fd 2...
	I1217 01:33:24.099324 1391118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:24.099626 1391118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:33:24.099912 1391118 out.go:368] Setting JSON to false
	I1217 01:33:24.100696 1391118 daemonize_unix.go:73] killing process 1391043 as it is an old scheduled stop
	I1217 01:33:24.100813 1391118 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:24.101216 1391118 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:33:24.101297 1391118 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/config.json ...
	I1217 01:33:24.101552 1391118 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:24.101701 1391118 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 01:33:24.109842 1211243 retry.go:31] will retry after 118.8µs: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.110346 1211243 retry.go:31] will retry after 171.774µs: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.111504 1211243 retry.go:31] will retry after 331.126µs: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.112743 1211243 retry.go:31] will retry after 465.132µs: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.113879 1211243 retry.go:31] will retry after 268.173µs: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.115974 1211243 retry.go:31] will retry after 1.037196ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.117410 1211243 retry.go:31] will retry after 1.657308ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.119579 1211243 retry.go:31] will retry after 2.299163ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.122935 1211243 retry.go:31] will retry after 3.621604ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.127223 1211243 retry.go:31] will retry after 5.446595ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.133989 1211243 retry.go:31] will retry after 8.107681ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.142245 1211243 retry.go:31] will retry after 8.757853ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.151443 1211243 retry.go:31] will retry after 16.473894ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.168753 1211243 retry.go:31] will retry after 24.573081ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.194009 1211243 retry.go:31] will retry after 18.013849ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
I1217 01:33:24.212181 1211243 retry.go:31] will retry after 63.654745ms: open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-557771 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-557771 -n scheduled-stop-557771
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-557771
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-557771 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:33:50.074989 1391803 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:33:50.075198 1391803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:50.075208 1391803 out.go:374] Setting ErrFile to fd 2...
	I1217 01:33:50.075213 1391803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:33:50.075496 1391803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:33:50.075778 1391803 out.go:368] Setting JSON to false
	I1217 01:33:50.075882 1391803 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:50.076263 1391803 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1217 01:33:50.076343 1391803 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/scheduled-stop-557771/config.json ...
	I1217 01:33:50.076548 1391803 mustload.go:66] Loading cluster: scheduled-stop-557771
	I1217 01:33:50.076678 1391803 config.go:182] Loaded profile config "scheduled-stop-557771": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-557771
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-557771: exit status 7 (70.225778ms)

-- stdout --
	scheduled-stop-557771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-557771 -n scheduled-stop-557771
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-557771 -n scheduled-stop-557771: exit status 7 (71.94286ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-557771" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-557771
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-557771: (4.519068805s)
--- PASS: TestScheduledStopUnix (109.00s)
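
The run above walks the whole scheduled-stop lifecycle: --schedule forks a detached child that waits out the delay before stopping the cluster, a pid file under the profile directory tracks it (the retry lines above poll for that file with growing backoff), a second --schedule kills the old child and replaces it, and --cancel-scheduled clears everything. The flag combinations exercised, in order:

	$ minikube stop -p scheduled-stop-557771 --schedule 5m          # park a stop five minutes out
	$ minikube stop -p scheduled-stop-557771 --schedule 15s         # replaces the pending 5m stop
	$ minikube stop -p scheduled-stop-557771 --cancel-scheduled     # cancel all pending stops
	$ minikube status --format='{{.TimeToStop}}' -p scheduled-stop-557771   # shows any pending schedule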

TestInsufficientStorage (12.35s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-475876 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-475876 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.795481631s)

-- stdout --
	{"specversion":"1.0","id":"32fed3df-fbdc-4e1d-b0a3-17eda92868e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-475876] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f8c36cf-8b68-4498-a515-4237db304ac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"6f5fff5d-dec4-494b-b96b-7fe73a841d79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b44efeac-59a0-4556-886b-7b285cdfdd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig"}}
	{"specversion":"1.0","id":"eed36125-1948-49a9-890c-4e932ebd0c28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube"}}
	{"specversion":"1.0","id":"8edbf3e6-6a96-4baf-849a-3add47cd9d71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5e048477-a9a3-42c7-bea7-27f183dd2f69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2a549ca8-aac3-472c-acbc-359710c307ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f9319661-34e6-40e8-82b9-347608d9345e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6a65b689-955d-4b5a-ba3f-0860d96df24a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c87f2202-5c49-477f-9ae6-1e593203304c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d08744e6-9596-4235-8f18-876003d452b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-475876\" primary control-plane node in \"insufficient-storage-475876\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a91389e3-5b75-4b3d-9248-3eb4d45e796f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e00645b2-1bca-481b-8e74-8073acf1690f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"557e512b-8422-49a8-accb-b03ce807b103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-475876 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-475876 --output=json --layout=cluster: exit status 7 (293.364297ms)

-- stdout --
	{"Name":"insufficient-storage-475876","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-475876","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1217 01:34:49.826970 1393628 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-475876" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-475876 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-475876 --output=json --layout=cluster: exit status 7 (312.928292ms)

-- stdout --
	{"Name":"insufficient-storage-475876","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-475876","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1217 01:34:50.138477 1393693 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-475876" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
	E1217 01:34:50.150248 1393693 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/insufficient-storage-475876/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-475876" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-475876
E1217 01:34:50.422694 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-475876: (1.946256831s)
--- PASS: TestInsufficientStorage (12.35s)
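
With --output=json, start emits one CloudEvents envelope per line, and the failure surfaces as an io.k8s.sigs.minikube.error event carrying the RSRC_DOCKER_STORAGE reason and exit code 26 (the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible above fake the disk pressure). A minimal sketch of pulling the error out of the stream with jq (the jq filter is not part of the test):

	$ minikube start -p insufficient-storage-475876 --output=json --driver=docker --container-runtime=containerd \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " exit=" + .data.exitcode'
	RSRC_DOCKER_STORAGE exit=26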

TestRunningBinaryUpgrade (60.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1571339725 start -p running-upgrade-688005 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1571339725 start -p running-upgrade-688005 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.089455543s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-688005 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 01:42:53.489794 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-688005 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.414858596s)
helpers_test.go:176: Cleaning up "running-upgrade-688005" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-688005
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-688005: (2.307727329s)
--- PASS: TestRunningBinaryUpgrade (60.52s)

TestMissingContainerUpgrade (138.48s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1157146547 start -p missing-upgrade-040509 --memory=3072 --driver=docker  --container-runtime=containerd
E1217 01:35:09.433812 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1157146547 start -p missing-upgrade-040509 --memory=3072 --driver=docker  --container-runtime=containerd: (1m3.801924987s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-040509
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-040509
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-040509 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-040509 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.009785638s)
helpers_test.go:176: Cleaning up "missing-upgrade-040509" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-040509
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-040509: (2.075743423s)
--- PASS: TestMissingContainerUpgrade (138.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (97.889384ms)

-- stdout --
	* [NoKubernetes-241463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
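
Note: this test passes because the command is expected to fail; --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of resolving the conflict by hand, reusing the same profile and binary as above:

    $ minikube config unset kubernetes-version
    $ out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --driver=docker --container-runtime=containerd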

TestNoKubernetes/serial/StartWithK8s (44.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241463 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241463 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.581669669s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-241463 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.03s)

TestNoKubernetes/serial/StartWithStopK8s (8.9s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.26139561s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-241463 status -o json
no_kubernetes_test.go:226: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-241463 status -o json: exit status 2 (413.050515ms)

-- stdout --
	{"Name":"NoKubernetes-241463","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-241463
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-241463: (2.229833239s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.90s)
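
Note: the non-zero exit from "status" is expected here; with --no-kubernetes the host container keeps running while the kubelet and API server are stopped. A sketch of asserting that state from the JSON above, assuming jq is available on the runner:

    $ out/minikube-linux-arm64 -p NoKubernetes-241463 status -o json | jq -r '.Host, .Kubelet'
    Running
    Stopped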

TestNoKubernetes/serial/Start (9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:162: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:162: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241463 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.995427784s)
--- PASS: TestNoKubernetes/serial/Start (9.00s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-241463 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-241463 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.589323ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
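
Note: "systemctl is-active" exits 0 only for an active unit, so exit status 3 (inactive) is the expected outcome. The equivalent manual check, as a sketch without --quiet so the state is printed:

    $ out/minikube-linux-arm64 ssh -p NoKubernetes-241463 "sudo systemctl is-active kubelet"
    inactive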

TestNoKubernetes/serial/ProfileList (1.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:195: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:205: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

TestNoKubernetes/serial/Stop (1.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:184: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-241463
no_kubernetes_test.go:184: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-241463: (1.386733954s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (6.9s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:217: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-241463 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:217: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-241463 --driver=docker  --container-runtime=containerd: (6.899675446s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-241463 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-241463 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.433649ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/Upgrade (304.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2537079702 start -p stopped-upgrade-675118 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2537079702 start -p stopped-upgrade-675118 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.477527352s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2537079702 -p stopped-upgrade-675118 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2537079702 -p stopped-upgrade-675118 stop: (1.274680032s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-675118 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 01:39:50.423204 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:09.433526 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:41:32.514821 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:41:56.877726 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-675118 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m30.257230501s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.01s)
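
Note: the upgrade path under test is old binary start -> old binary stop -> new binary start against the same profile. As a sketch, with OLD_BIN standing in for the downloaded v1.35.0 release binary (the /tmp path above is a per-run temp copy):

    $ "$OLD_BIN" start -p stopped-upgrade-675118 --memory=3072 --vm-driver=docker --container-runtime=containerd
    $ "$OLD_BIN" -p stopped-upgrade-675118 stop
    $ out/minikube-linux-arm64 start -p stopped-upgrade-675118 --memory=3072 --driver=docker --container-runtime=containerd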

TestStoppedBinaryUpgrade/MinikubeLogs (2.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-675118
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-675118: (2.05770005s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.06s)

TestPause/serial/Start (47.29s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-580096 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-580096 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (47.291460312s)
--- PASS: TestPause/serial/Start (47.29s)

TestPause/serial/SecondStartNoReconfiguration (6.1s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-580096 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-580096 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.085127131s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.10s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-580096 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-580096 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-580096 --output=json --layout=cluster: exit status 2 (333.26276ms)

-- stdout --
	{"Name":"pause-580096","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-580096","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
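
Note: --layout=cluster reports per-component status codes (418 "Paused", 405 "Stopped", 200 "OK"), and the degraded state is mirrored in the CLI's exit code, hence the expected exit status 2. A sketch of extracting just the node components, assuming jq:

    $ out/minikube-linux-arm64 status -p pause-580096 --output=json --layout=cluster | jq '.Nodes[0].Components'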

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-580096 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-580096 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-580096 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-580096 --alsologtostderr -v=5: (2.749264665s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

TestPause/serial/VerifyDeletedResources (0.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-580096
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-580096: exit status 1 (19.915659ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-580096: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
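
Note: deletion is verified negatively: the volume lookup must fail once the profile is gone. The same spot-checks by hand, as a sketch:

    $ docker volume inspect pause-580096              # expected: exit 1, "no such volume"
    $ docker ps -a --filter name=pause-580096 -q      # expected: no output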

TestNetworkPlugins/group/false (3.72s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-721629 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-721629 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (220.193247ms)

-- stdout --
	* [false-721629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1217 01:45:01.599152 1445019 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:45:01.599269 1445019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:45:01.599307 1445019 out.go:374] Setting ErrFile to fd 2...
	I1217 01:45:01.599320 1445019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:45:01.599580 1445019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
	I1217 01:45:01.600005 1445019 out.go:368] Setting JSON to false
	I1217 01:45:01.600892 1445019 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26852,"bootTime":1765909050,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1217 01:45:01.601005 1445019 start.go:143] virtualization:  
	I1217 01:45:01.605035 1445019 out.go:179] * [false-721629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:45:01.608237 1445019 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:45:01.608295 1445019 notify.go:221] Checking for updates...
	I1217 01:45:01.612218 1445019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:45:01.615389 1445019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
	I1217 01:45:01.618444 1445019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
	I1217 01:45:01.622477 1445019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:45:01.625514 1445019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:45:01.629030 1445019 config.go:182] Loaded profile config "kubernetes-upgrade-916713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1217 01:45:01.629140 1445019 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:45:01.675265 1445019 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:45:01.675388 1445019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:45:01.745890 1445019 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:45:01.736115128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:45:01.745996 1445019 docker.go:319] overlay module found
	I1217 01:45:01.749136 1445019 out.go:179] * Using the docker driver based on user configuration
	I1217 01:45:01.752021 1445019 start.go:309] selected driver: docker
	I1217 01:45:01.752044 1445019 start.go:927] validating driver "docker" against <nil>
	I1217 01:45:01.752058 1445019 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:45:01.755475 1445019 out.go:203] 
	W1217 01:45:01.758362 1445019 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1217 01:45:01.761137 1445019 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-721629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-721629

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-721629

>>> host: /etc/nsswitch.conf:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/hosts:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/resolv.conf:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-721629

>>> host: crictl pods:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: crictl containers:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> k8s: describe netcat deployment:
error: context "false-721629" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-721629" does not exist

>>> k8s: netcat logs:
error: context "false-721629" does not exist

>>> k8s: describe coredns deployment:
error: context "false-721629" does not exist

>>> k8s: describe coredns pods:
error: context "false-721629" does not exist

>>> k8s: coredns logs:
error: context "false-721629" does not exist

>>> k8s: describe api server pod(s):
error: context "false-721629" does not exist

>>> k8s: api server logs:
error: context "false-721629" does not exist

>>> host: /etc/cni:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: ip a s:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: ip r s:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: iptables-save:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: iptables table nat:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> k8s: describe kube-proxy daemon set:
error: context "false-721629" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-721629" does not exist

>>> k8s: kube-proxy logs:
error: context "false-721629" does not exist

>>> host: kubelet daemon status:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: kubelet daemon config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> k8s: kubelet logs:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:36:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-916713
contexts:
- context:
    cluster: kubernetes-upgrade-916713
    user: kubernetes-upgrade-916713
  name: kubernetes-upgrade-916713
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-916713
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.crt
    client-key: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-721629

>>> host: docker daemon status:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: docker daemon config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/docker/daemon.json:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: docker system info:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: cri-docker daemon status:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: cri-docker daemon config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: cri-dockerd version:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: containerd daemon status:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: containerd daemon config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/containerd/config.toml:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: containerd config dump:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: crio daemon status:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: crio daemon config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: /etc/crio:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

>>> host: crio config:
* Profile "false-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721629"

----------------------- debugLogs end: false-721629 [took: 3.340525129s] --------------------------------
helpers_test.go:176: Cleaning up "false-721629" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-721629
--- PASS: TestNetworkPlugins/group/false (3.72s)
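
Note: another expected-failure test; --cni=false is rejected because the containerd runtime needs a CNI plugin for pod networking. A start that would pass this validation, as a sketch using minikube's built-in bridge CNI:

    $ out/minikube-linux-arm64 start -p false-721629 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd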

TestStartStop/group/old-k8s-version/serial/FirstStart (69.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-859530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1217 01:49:50.422558 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-859530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m9.48036318s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.48s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m20.997378116s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.00s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-859530 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dd0def66-766a-4bad-a168-b0764f1f699c] Pending
helpers_test.go:353: "busybox" [dd0def66-766a-4bad-a168-b0764f1f699c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dd0def66-766a-4bad-a168-b0764f1f699c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003633734s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-859530 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
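
Note: the readiness poll above can be approximated with kubectl alone; this sketch waits on the same label selector the test uses, then repeats the ulimit probe:

    $ kubectl --context old-k8s-version-859530 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    $ kubectl --context old-k8s-version-859530 exec busybox -- /bin/sh -c "ulimit -n"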

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-859530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-859530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.167081182s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-859530 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-859530 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-859530 --alsologtostderr -v=3: (12.371710092s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-859530 -n old-k8s-version-859530
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-859530 -n old-k8s-version-859530: exit status 7 (73.322391ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-859530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)
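
Note: exit status 7 from "minikube status" corresponds to the stopped host shown in stdout, which is why the test treats it as acceptable ("may be ok") before enabling the dashboard addon offline. Checking it explicitly, as a sketch:

    $ out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-859530; echo "exit=$?"
    Stopped
    exit=7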

TestStartStop/group/old-k8s-version/serial/SecondStart (54.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-859530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-859530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (54.361864526s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-859530 -n old-k8s-version-859530
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-069646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1b0c5846-be5e-4997-ac67-2d5187d6f814] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1b0c5846-be5e-4997-ac67-2d5187d6f814] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003130794s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-069646 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-069646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-069646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059190578s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-069646 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-069646 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-069646 --alsologtostderr -v=3: (12.228251124s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-sd5gq" [c124ba37-f1ad-4141-b42c-3bffd8ec585b] Running
E1217 01:51:56.877812 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003056053s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646: exit status 7 (80.149649ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-069646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
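
Note: the "status error: exit status 7 (may be ok)" lines are the harness tolerating, not hiding, a nonzero exit: minikube status signals component state through its exit code, so a fully stopped profile exits nonzero by design, and the test keys off the printed "Stopped" instead. A sketch against a hypothetical stopped profile "demo":

  # While the profile is stopped, the Host field prints "Stopped" and the
  # command exits nonzero (7 in the runs above); that is expected here.
  minikube status --format={{.Host}} -p demo
  echo $?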

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069646 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (54.259398246s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.72s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-sd5gq" [c124ba37-f1ad-4141-b42c-3bffd8ec585b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00375928s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-859530 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-859530 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)
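
Note: VerifyKubernetesImages dumps the node's image list as JSON and reports anything outside minikube's expected set; the kindest/kindnetd and busybox images flagged above are leftovers from earlier test steps, not failures. A sketch for inspecting the same list, assuming jq is available and that the JSON output is an array of objects carrying a repoTags field:

  # List every repo tag the node's runtime knows about for this profile.
  minikube -p old-k8s-version-859530 image list --format=json \
    | jq -r '.[].repoTags[]'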

TestStartStop/group/old-k8s-version/serial/Pause (4.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-859530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-859530 --alsologtostderr -v=1: (1.128321122s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-859530 -n old-k8s-version-859530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-859530 -n old-k8s-version-859530: exit status 2 (514.639603ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-859530 -n old-k8s-version-859530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-859530 -n old-k8s-version-859530: exit status 2 (498.359272ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-859530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-859530 --alsologtostderr -v=1: (1.187996996s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-859530 -n old-k8s-version-859530
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-859530 -n old-k8s-version-859530
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.65s)
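
Note: Pause drives a full pause/status/unpause cycle. While paused, status reports the API server as Paused and the kubelet as Stopped, each with exit status 2, which the harness again accepts. The same cycle by hand, against a hypothetical profile "demo":

  minikube pause -p demo --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p demo   # prints "Paused", exits 2
  minikube status --format={{.Kubelet}} -p demo     # prints "Stopped", exits 2
  minikube unpause -p demo --alsologtostderr -v=1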

TestStartStop/group/embed-certs/serial/FirstStart (82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m21.995774902s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-sbx6r" [408c5e77-04d9-44c5-8845-a39a4ddfc5e3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003385141s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-sbx6r" [408c5e77-04d9-44c5-8845-a39a4ddfc5e3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003870173s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-069646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-069646 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-069646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646: exit status 2 (321.623721ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646: exit status 2 (360.374499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-069646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069646 -n default-k8s-diff-port-069646
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-608379 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d5140469-d63d-4d83-96ae-2fff0c9a9f83] Pending
helpers_test.go:353: "busybox" [d5140469-d63d-4d83-96ae-2fff0c9a9f83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d5140469-d63d-4d83-96ae-2fff0c9a9f83] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00378272s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-608379 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)
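
Note: DeployApp applies testdata/busybox.yaml, waits for the pod to reach Running, then reads the container's open-file limit. The harness polls pods by label; a rough hand-rolled equivalent of that wait (a sketch, not what the test itself executes):

  # Wait for the test pod via its label, then check the fd limit inside it.
  kubectl --context embed-certs-608379 wait pod -l integration-test=busybox \
    --for=condition=Ready --timeout=8m
  kubectl --context embed-certs-608379 exec busybox -- /bin/sh -c "ulimit -n"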

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-608379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052600863s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-608379 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-608379 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-608379 --alsologtostderr -v=3: (12.131573322s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-608379 -n embed-certs-608379
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-608379 -n embed-certs-608379: exit status 7 (83.666769ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-608379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (53.41s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1217 01:54:50.422717 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-608379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (53.029625271s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-608379 -n embed-certs-608379
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.41s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8hz4j" [eefbbecb-80e8-4984-a99e-2a52d75b587f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004010312s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8hz4j" [eefbbecb-80e8-4984-a99e-2a52d75b587f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00306056s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-608379 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-608379 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-608379 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-608379 -n embed-certs-608379
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-608379 -n embed-certs-608379: exit status 2 (353.698799ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-608379 -n embed-certs-608379
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-608379 -n embed-certs-608379: exit status 2 (329.121057ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-608379 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-608379 -n embed-certs-608379
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-608379 -n embed-certs-608379
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/Stop (1.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-178365 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-178365 --alsologtostderr -v=3: (1.34089671s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178365 -n no-preload-178365: exit status 7 (76.681878ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-178365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-456492 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-456492 --alsologtostderr -v=3: (1.296204272s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-456492 -n newest-cni-456492: exit status 7 (68.600839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-456492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-456492 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/auto/Start (51.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (51.524155945s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-721629 "pgrep -a kubelet"
I1217 02:12:29.761206 1211243 config.go:182] Loaded profile config "auto-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
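
Note: KubeletFlags verifies the kubelet inside the node is running with the flags minikube configured, by grepping the process command line over ssh:

  # Show the running kubelet's full command line inside the node.
  minikube ssh -p auto-721629 "pgrep -a kubelet"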

TestNetworkPlugins/group/auto/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9x8hn" [68754860-4aca-486a-b305-15d737a0227a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9x8hn" [68754860-4aca-486a-b305-15d737a0227a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00508273s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
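
Note: the DNS/Localhost/HairPin trio above probes three distinct paths from inside the netcat pod: cluster DNS resolution, loopback inside the pod, and hairpin NAT (the pod reaching itself back through its own Service name). All three checks can be replayed by hand:

  kubectl --context auto-721629 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"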

TestNetworkPlugins/group/kindnet/Start (50.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (50.666687906s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-42xkj" [3440235d-a175-4e88-8a69-8647674bf41d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00333176s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
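
Note: ControllerPod gates the traffic tests on the CNI's node agent being healthy. The harness uses its own label-based poller; roughly the same wait with stock kubectl (a sketch):

  # Wait for the kindnet daemonset pod to become Ready.
  kubectl --context kindnet-721629 -n kube-system wait pod -l app=kindnet \
    --for=condition=Ready --timeout=10m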

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-721629 "pgrep -a kubelet"
I1217 02:13:56.879173 1211243 config.go:182] Loaded profile config "kindnet-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hv7km" [ffc66e9f-0318-47ec-97ef-77292adb5a6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hv7km" [ffc66e9f-0318-47ec-97ef-77292adb5a6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003120478s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (69.61s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m9.608314091s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.61s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-b98bg" [f9834f2f-5990-44e6-9d2f-90f900421d53] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003702261s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-721629 "pgrep -a kubelet"
I1217 02:15:43.990178 1211243 config.go:182] Loaded profile config "calico-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fvljj" [629583c8-5083-47b6-984c-7dcb49a6995b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fvljj" [629583c8-5083-47b6-984c-7dcb49a6995b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003499834s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.33s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (56.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.139870084s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-721629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8hhhn" [7e4f5741-762d-4590-b007-2fb3b86ef82c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8hhhn" [7e4f5741-762d-4590-b007-2fb3b86ef82c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003007151s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (71.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m11.092721104s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-721629 "pgrep -a kubelet"
I1217 02:18:54.648676 1211243 config.go:182] Loaded profile config "enable-default-cni-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fcqdq" [21807ddd-8c4c-4d8f-9d73-6127afeb9c58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fcqdq" [21807ddd-8c4c-4d8f-9d73-6127afeb9c58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003410598s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (55.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.370396203s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-b88z6" [a3da94fa-101e-4834-af59-941fc41dedd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00424135s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-721629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vtwfc" [d6d939d1-09a4-4a15-8ed6-a017aa504329] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vtwfc" [d6d939d1-09a4-4a15-8ed6-a017aa504329] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004556433s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-721629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (48.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1217 02:20:58.152910 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/calico-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-721629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.200831622s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-721629 "pgrep -a kubelet"
I1217 02:21:46.616697 1211243 config.go:182] Loaded profile config "bridge-721629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-721629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cxnhq" [69d85037-db17-44ba-9c1b-d18562bc11d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cxnhq" [69d85037-db17-44ba-9c1b-d18562bc11d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003934132s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-721629 exec deployment/netcat -- nslookup kubernetes.default
E1217 02:21:56.877590 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-721629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E1217 02:22:17.765517 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/custom-flannel-721629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
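
(Localhost and HairPin above both run nc from inside the netcat pod: the first connects to localhost:8080, the second to the pod's own service name, netcat:8080, which only succeeds when the CNI handles hairpin traffic. A combined sketch of the two probes, assuming the same profile, deployment, and port as the log:)

// hairpincheck.go - hedged sketch of the Localhost and HairPin probes.
package main

import (
	"fmt"
	"os/exec"
)

// probe runs `nc -w 5 -i 5 -z <host> 8080` inside the netcat deployment.
func probe(kubeContext, host string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", host))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("nc to %s:8080 failed: %v\n%s", host, err, out)
	}
	return nil
}

func main() {
	for _, host := range []string{"localhost", "netcat"} {
		if err := probe("bridge-721629", host); err != nil {
			fmt.Println(err)
		}
	}
}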

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 4.08
400 TestNetworkPlugins/group/cilium 3.91
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-080086 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-080086" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-080086
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-743315" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-743315
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-721629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-721629

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-721629

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/hosts:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/resolv.conf:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-721629

>>> host: crictl pods:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: crictl containers:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> k8s: describe netcat deployment:
error: context "kubenet-721629" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-721629" does not exist

>>> k8s: netcat logs:
error: context "kubenet-721629" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-721629" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-721629" does not exist

>>> k8s: coredns logs:
error: context "kubenet-721629" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-721629" does not exist

>>> k8s: api server logs:
error: context "kubenet-721629" does not exist

>>> host: /etc/cni:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: ip a s:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: ip r s:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: iptables-save:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: iptables table nat:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-721629" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-721629" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-721629" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: kubelet daemon config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> k8s: kubelet logs:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:36:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-916713
contexts:
- context:
    cluster: kubernetes-upgrade-916713
    user: kubernetes-upgrade-916713
  name: kubernetes-upgrade-916713
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-916713
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.crt
    client-key: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-721629

>>> host: docker daemon status:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: docker daemon config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: docker system info:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: cri-docker daemon status:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: cri-docker daemon config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: cri-dockerd version:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: containerd daemon status:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: containerd daemon config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: containerd config dump:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: crio daemon status:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: crio daemon config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: /etc/crio:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

>>> host: crio config:
* Profile "kubenet-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721629"

----------------------- debugLogs end: kubenet-721629 [took: 3.900782726s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-721629" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-721629
--- SKIP: TestNetworkPlugins/group/kubenet (4.08s)

x
+
TestNetworkPlugins/group/cilium (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-721629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-721629

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-721629" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> k8s: kubelet logs:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:36:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-916713
contexts:
- context:
    cluster: kubernetes-upgrade-916713
    user: kubernetes-upgrade-916713
  name: kubernetes-upgrade-916713
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-916713
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.crt
    client-key: /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/kubernetes-upgrade-916713/client.key
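
The kubeconfig dump above explains every kubectl failure in this log: the file holds only the kubernetes-upgrade-916713 entries and its current-context is empty, so the cilium-721629 context that debugLogs requests can never resolve. A minimal way to confirm this against the same kubeconfig (standard kubectl config subcommands; nothing here is specific to the test harness):

# Contexts actually present in the kubeconfig; cilium-721629 is absent
# because its profile was never created.
kubectl config get-contexts

# Pinning any kubectl call to the missing context reproduces the error
# repeated throughout this log:
kubectl --context cilium-721629 get pods -A
# Error in configuration: context was not found for specified context: cilium-721629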

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-721629

>>> host: docker daemon status:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: docker daemon config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: docker system info:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: cri-docker daemon status:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: cri-docker daemon config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: cri-dockerd version:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: containerd daemon status:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: containerd daemon config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: containerd config dump:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: crio daemon status:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: crio daemon config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: /etc/crio:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

>>> host: crio config:
* Profile "cilium-721629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721629"

----------------------- debugLogs end: cilium-721629 [took: 3.741066358s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-721629" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-721629
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)
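
For reference, all of the probes above ran even though the cilium-721629 profile was never started, so every one of them failed with a not-found error. A hypothetical pre-check (a sketch only; the profile name and binary path come from this run, while the guard itself is not the harness's actual logic) would skip the collection outright:

# Hypothetical sketch: collect cluster debug logs only when the profile exists.
PROFILE=cilium-721629
if out/minikube-linux-arm64 profile list --output json | grep -q "\"${PROFILE}\""; then
  kubectl --context "${PROFILE}" get pods -A   # ...and the other probes above
else
  echo "profile ${PROFILE} not found; skipping debug log collection"
fi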
